Apr 30 12:36:20.165881 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 30 12:36:20.165926 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue Apr 29 22:28:35 -00 2025
Apr 30 12:36:20.165950 kernel: KASLR disabled due to lack of seed
Apr 30 12:36:20.165967 kernel: efi: EFI v2.7 by EDK II
Apr 30 12:36:20.165982 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Apr 30 12:36:20.165997 kernel: secureboot: Secure boot disabled
Apr 30 12:36:20.166014 kernel: ACPI: Early table checksum verification disabled
Apr 30 12:36:20.166030 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 30 12:36:20.170082 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 30 12:36:20.170126 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 30 12:36:20.170155 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Apr 30 12:36:20.170172 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 30 12:36:20.170188 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 30 12:36:20.170204 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 30 12:36:20.170222 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 30 12:36:20.170244 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 30 12:36:20.170261 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 30 12:36:20.170278 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 30 12:36:20.170294 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 30 12:36:20.170311 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 30 12:36:20.170327 kernel: printk: bootconsole [uart0] enabled
Apr 30 12:36:20.170344 kernel: NUMA: Failed to initialise from firmware
Apr 30 12:36:20.170362 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 30 12:36:20.170379 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 30 12:36:20.170396 kernel: Zone ranges:
Apr 30 12:36:20.170414 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 12:36:20.170436 kernel: DMA32 empty
Apr 30 12:36:20.170454 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 30 12:36:20.170471 kernel: Movable zone start for each node
Apr 30 12:36:20.170488 kernel: Early memory node ranges
Apr 30 12:36:20.170507 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 30 12:36:20.170523 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 30 12:36:20.170541 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 30 12:36:20.170557 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 30 12:36:20.170575 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 30 12:36:20.170592 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 30 12:36:20.170610 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 30 12:36:20.170627 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 30 12:36:20.170649 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 30 12:36:20.170668 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 30 12:36:20.170693 kernel: psci: probing for conduit method from ACPI.
Apr 30 12:36:20.170711 kernel: psci: PSCIv1.0 detected in firmware.
Apr 30 12:36:20.170729 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 12:36:20.170751 kernel: psci: Trusted OS migration not required
Apr 30 12:36:20.170769 kernel: psci: SMC Calling Convention v1.1
Apr 30 12:36:20.170788 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 12:36:20.170806 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 12:36:20.170826 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 12:36:20.170844 kernel: Detected PIPT I-cache on CPU0
Apr 30 12:36:20.170861 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 12:36:20.170879 kernel: CPU features: detected: Spectre-v2
Apr 30 12:36:20.170896 kernel: CPU features: detected: Spectre-v3a
Apr 30 12:36:20.170915 kernel: CPU features: detected: Spectre-BHB
Apr 30 12:36:20.170933 kernel: CPU features: detected: ARM erratum 1742098
Apr 30 12:36:20.170950 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 30 12:36:20.170974 kernel: alternatives: applying boot alternatives
Apr 30 12:36:20.170993 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=984055eb0c340c9cf0fb51b368030ed72e75b7f2e065edc13766888ef0b42074
Apr 30 12:36:20.171012 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 12:36:20.171029 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 12:36:20.171141 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 12:36:20.171166 kernel: Fallback order for Node 0: 0
Apr 30 12:36:20.171185 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 30 12:36:20.171203 kernel: Policy zone: Normal
Apr 30 12:36:20.171221 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 12:36:20.171239 kernel: software IO TLB: area num 2.
Apr 30 12:36:20.171266 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 30 12:36:20.171285 kernel: Memory: 3821176K/4030464K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 209288K reserved, 0K cma-reserved)
Apr 30 12:36:20.171305 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 12:36:20.171323 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 12:36:20.171341 kernel: rcu: RCU event tracing is enabled.
Apr 30 12:36:20.171361 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 12:36:20.171378 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 12:36:20.171397 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 12:36:20.171414 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 12:36:20.171431 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 12:36:20.171448 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 12:36:20.171469 kernel: GICv3: 96 SPIs implemented
Apr 30 12:36:20.171487 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 12:36:20.171503 kernel: Root IRQ handler: gic_handle_irq
Apr 30 12:36:20.171520 kernel: GICv3: GICv3 features: 16 PPIs
Apr 30 12:36:20.171537 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 30 12:36:20.171554 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 30 12:36:20.171571 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 12:36:20.171588 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 12:36:20.171605 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 30 12:36:20.171622 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 30 12:36:20.171639 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 30 12:36:20.171656 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 12:36:20.171678 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 30 12:36:20.171695 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 30 12:36:20.171713 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 30 12:36:20.171730 kernel: Console: colour dummy device 80x25
Apr 30 12:36:20.171748 kernel: printk: console [tty1] enabled
Apr 30 12:36:20.171766 kernel: ACPI: Core revision 20230628
Apr 30 12:36:20.171783 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 30 12:36:20.171800 kernel: pid_max: default: 32768 minimum: 301
Apr 30 12:36:20.171818 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 12:36:20.171835 kernel: landlock: Up and running.
Apr 30 12:36:20.171857 kernel: SELinux: Initializing.
Apr 30 12:36:20.171874 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 12:36:20.171892 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 12:36:20.171909 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 12:36:20.171927 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 12:36:20.171944 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 12:36:20.171962 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 12:36:20.171979 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 30 12:36:20.172000 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 30 12:36:20.172018 kernel: Remapping and enabling EFI services.
Apr 30 12:36:20.172035 kernel: smp: Bringing up secondary CPUs ...
Apr 30 12:36:20.174112 kernel: Detected PIPT I-cache on CPU1
Apr 30 12:36:20.174142 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 30 12:36:20.174160 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 30 12:36:20.174178 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 30 12:36:20.174195 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 12:36:20.174213 kernel: SMP: Total of 2 processors activated.
Apr 30 12:36:20.174231 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 12:36:20.174260 kernel: CPU features: detected: 32-bit EL1 Support
Apr 30 12:36:20.174282 kernel: CPU features: detected: CRC32 instructions
Apr 30 12:36:20.174355 kernel: CPU: All CPU(s) started at EL1
Apr 30 12:36:20.174427 kernel: alternatives: applying system-wide alternatives
Apr 30 12:36:20.174451 kernel: devtmpfs: initialized
Apr 30 12:36:20.174476 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 12:36:20.174497 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 12:36:20.174516 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 12:36:20.174535 kernel: SMBIOS 3.0.0 present.
Apr 30 12:36:20.174559 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 30 12:36:20.174578 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 12:36:20.174596 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 12:36:20.174614 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 12:36:20.174632 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 12:36:20.174650 kernel: audit: initializing netlink subsys (disabled)
Apr 30 12:36:20.174668 kernel: audit: type=2000 audit(0.219:1): state=initialized audit_enabled=0 res=1
Apr 30 12:36:20.174692 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 12:36:20.174710 kernel: cpuidle: using governor menu
Apr 30 12:36:20.174728 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 12:36:20.174746 kernel: ASID allocator initialised with 65536 entries
Apr 30 12:36:20.174764 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 12:36:20.174783 kernel: Serial: AMBA PL011 UART driver
Apr 30 12:36:20.174801 kernel: Modules: 17744 pages in range for non-PLT usage
Apr 30 12:36:20.174819 kernel: Modules: 509264 pages in range for PLT usage
Apr 30 12:36:20.174837 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 12:36:20.174860 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 12:36:20.174879 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 12:36:20.174896 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 12:36:20.174914 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 12:36:20.174932 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 12:36:20.174950 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 12:36:20.174968 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 12:36:20.174986 kernel: ACPI: Added _OSI(Module Device)
Apr 30 12:36:20.175004 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 12:36:20.175027 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 12:36:20.175063 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 12:36:20.175087 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 12:36:20.175105 kernel: ACPI: Interpreter enabled
Apr 30 12:36:20.175123 kernel: ACPI: Using GIC for interrupt routing
Apr 30 12:36:20.175141 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 12:36:20.175159 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Apr 30 12:36:20.175483 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 12:36:20.175693 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 12:36:20.175892 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 12:36:20.176491 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Apr 30 12:36:20.176710 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Apr 30 12:36:20.176735 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 30 12:36:20.176753 kernel: acpiphp: Slot [1] registered
Apr 30 12:36:20.176771 kernel: acpiphp: Slot [2] registered
Apr 30 12:36:20.176789 kernel: acpiphp: Slot [3] registered
Apr 30 12:36:20.176813 kernel: acpiphp: Slot [4] registered
Apr 30 12:36:20.176832 kernel: acpiphp: Slot [5] registered
Apr 30 12:36:20.176849 kernel: acpiphp: Slot [6] registered
Apr 30 12:36:20.176867 kernel: acpiphp: Slot [7] registered
Apr 30 12:36:20.176885 kernel: acpiphp: Slot [8] registered
Apr 30 12:36:20.176902 kernel: acpiphp: Slot [9] registered
Apr 30 12:36:20.176920 kernel: acpiphp: Slot [10] registered
Apr 30 12:36:20.176938 kernel: acpiphp: Slot [11] registered
Apr 30 12:36:20.176956 kernel: acpiphp: Slot [12] registered
Apr 30 12:36:20.176974 kernel: acpiphp: Slot [13] registered
Apr 30 12:36:20.176996 kernel: acpiphp: Slot [14] registered
Apr 30 12:36:20.177014 kernel: acpiphp: Slot [15] registered
Apr 30 12:36:20.177032 kernel: acpiphp: Slot [16] registered
Apr 30 12:36:20.177067 kernel: acpiphp: Slot [17] registered
Apr 30 12:36:20.177088 kernel: acpiphp: Slot [18] registered
Apr 30 12:36:20.177107 kernel: acpiphp: Slot [19] registered
Apr 30 12:36:20.177125 kernel: acpiphp: Slot [20] registered
Apr 30 12:36:20.177143 kernel: acpiphp: Slot [21] registered
Apr 30 12:36:20.177161 kernel: acpiphp: Slot [22] registered
Apr 30 12:36:20.177208 kernel: acpiphp: Slot [23] registered
Apr 30 12:36:20.177228 kernel: acpiphp: Slot [24] registered
Apr 30 12:36:20.177247 kernel: acpiphp: Slot [25] registered
Apr 30 12:36:20.177264 kernel: acpiphp: Slot [26] registered
Apr 30 12:36:20.177282 kernel: acpiphp: Slot [27] registered
Apr 30 12:36:20.177300 kernel: acpiphp: Slot [28] registered
Apr 30 12:36:20.177318 kernel: acpiphp: Slot [29] registered
Apr 30 12:36:20.177336 kernel: acpiphp: Slot [30] registered
Apr 30 12:36:20.177354 kernel: acpiphp: Slot [31] registered
Apr 30 12:36:20.177372 kernel: PCI host bridge to bus 0000:00
Apr 30 12:36:20.177611 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 30 12:36:20.177821 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 12:36:20.178009 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 30 12:36:20.178298 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Apr 30 12:36:20.178540 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 30 12:36:20.178769 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 30 12:36:20.178985 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 30 12:36:20.179228 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 30 12:36:20.180537 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 30 12:36:20.180784 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 30 12:36:20.181014 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 30 12:36:20.181333 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 30 12:36:20.181549 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 30 12:36:20.181766 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 30 12:36:20.181971 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 30 12:36:20.182275 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Apr 30 12:36:20.182482 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Apr 30 12:36:20.182684 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Apr 30 12:36:20.182883 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Apr 30 12:36:20.183130 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Apr 30 12:36:20.188249 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 30 12:36:20.188458 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 12:36:20.188658 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 30 12:36:20.188684 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 12:36:20.188704 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 12:36:20.188723 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 12:36:20.188741 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 12:36:20.188759 kernel: iommu: Default domain type: Translated
Apr 30 12:36:20.188787 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 12:36:20.188806 kernel: efivars: Registered efivars operations
Apr 30 12:36:20.188824 kernel: vgaarb: loaded
Apr 30 12:36:20.188843 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 12:36:20.188861 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 12:36:20.188879 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 12:36:20.188898 kernel: pnp: PnP ACPI init
Apr 30 12:36:20.189189 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 30 12:36:20.189231 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 12:36:20.189251 kernel: NET: Registered PF_INET protocol family
Apr 30 12:36:20.189272 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 12:36:20.191497 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 12:36:20.191518 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 12:36:20.191538 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 12:36:20.191556 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 12:36:20.191574 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 12:36:20.191593 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 12:36:20.191620 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 12:36:20.191639 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 12:36:20.191657 kernel: PCI: CLS 0 bytes, default 64
Apr 30 12:36:20.191675 kernel: kvm [1]: HYP mode not available
Apr 30 12:36:20.191693 kernel: Initialise system trusted keyrings
Apr 30 12:36:20.191712 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 12:36:20.191770 kernel: Key type asymmetric registered
Apr 30 12:36:20.191793 kernel: Asymmetric key parser 'x509' registered
Apr 30 12:36:20.191812 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 12:36:20.191837 kernel: io scheduler mq-deadline registered
Apr 30 12:36:20.191856 kernel: io scheduler kyber registered
Apr 30 12:36:20.191874 kernel: io scheduler bfq registered
Apr 30 12:36:20.194443 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 30 12:36:20.194491 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 12:36:20.194510 kernel: ACPI: button: Power Button [PWRB]
Apr 30 12:36:20.194529 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 30 12:36:20.194547 kernel: ACPI: button: Sleep Button [SLPB]
Apr 30 12:36:20.194575 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 12:36:20.194595 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 12:36:20.194827 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 30 12:36:20.194856 kernel: printk: console [ttyS0] disabled
Apr 30 12:36:20.194876 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 30 12:36:20.194895 kernel: printk: console [ttyS0] enabled
Apr 30 12:36:20.194914 kernel: printk: bootconsole [uart0] disabled
Apr 30 12:36:20.194933 kernel: thunder_xcv, ver 1.0
Apr 30 12:36:20.194951 kernel: thunder_bgx, ver 1.0
Apr 30 12:36:20.194969 kernel: nicpf, ver 1.0
Apr 30 12:36:20.194994 kernel: nicvf, ver 1.0
Apr 30 12:36:20.195256 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 12:36:20.195472 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T12:36:19 UTC (1746016579)
Apr 30 12:36:20.195501 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 12:36:20.195521 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 30 12:36:20.195541 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 12:36:20.195560 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 12:36:20.195591 kernel: NET: Registered PF_INET6 protocol family
Apr 30 12:36:20.195611 kernel: Segment Routing with IPv6
Apr 30 12:36:20.195630 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 12:36:20.195650 kernel: NET: Registered PF_PACKET protocol family
Apr 30 12:36:20.195668 kernel: Key type dns_resolver registered
Apr 30 12:36:20.195691 kernel: registered taskstats version 1
Apr 30 12:36:20.195710 kernel: Loading compiled-in X.509 certificates
Apr 30 12:36:20.195729 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 4e3d8be893bce81adbd52ab54fa98214a1a14a2e'
Apr 30 12:36:20.195748 kernel: Key type .fscrypt registered
Apr 30 12:36:20.195766 kernel: Key type fscrypt-provisioning registered
Apr 30 12:36:20.195790 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 12:36:20.195809 kernel: ima: Allocated hash algorithm: sha1
Apr 30 12:36:20.195827 kernel: ima: No architecture policies found
Apr 30 12:36:20.195846 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 12:36:20.195864 kernel: clk: Disabling unused clocks
Apr 30 12:36:20.195882 kernel: Freeing unused kernel memory: 38336K
Apr 30 12:36:20.195901 kernel: Run /init as init process
Apr 30 12:36:20.195920 kernel: with arguments:
Apr 30 12:36:20.195938 kernel: /init
Apr 30 12:36:20.195962 kernel: with environment:
Apr 30 12:36:20.195980 kernel: HOME=/
Apr 30 12:36:20.195999 kernel: TERM=linux
Apr 30 12:36:20.196018 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 12:36:20.196039 systemd[1]: Successfully made /usr/ read-only.
Apr 30 12:36:20.197938 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 12:36:20.198267 systemd[1]: Detected virtualization amazon.
Apr 30 12:36:20.198298 systemd[1]: Detected architecture arm64.
Apr 30 12:36:20.198318 systemd[1]: Running in initrd.
Apr 30 12:36:20.198337 systemd[1]: No hostname configured, using default hostname.
Apr 30 12:36:20.198358 systemd[1]: Hostname set to .
Apr 30 12:36:20.198377 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 12:36:20.198397 systemd[1]: Queued start job for default target initrd.target.
Apr 30 12:36:20.198417 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:36:20.198437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:36:20.198457 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 12:36:20.198483 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:36:20.198503 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 12:36:20.198525 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 12:36:20.198547 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 12:36:20.198568 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 12:36:20.198587 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:36:20.198612 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:36:20.198632 systemd[1]: Reached target paths.target - Path Units.
Apr 30 12:36:20.198652 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:36:20.198672 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:36:20.198691 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 12:36:20.198711 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:36:20.198730 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:36:20.198750 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 12:36:20.198770 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 30 12:36:20.198794 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:36:20.198814 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:36:20.198834 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:36:20.198853 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 12:36:20.198873 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 12:36:20.198892 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:36:20.198912 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 12:36:20.198932 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 12:36:20.198956 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:36:20.198976 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:36:20.198995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:36:20.199015 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 12:36:20.199035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:36:20.200493 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 12:36:20.200574 systemd-journald[251]: Collecting audit messages is disabled.
Apr 30 12:36:20.200617 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 12:36:20.200638 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 12:36:20.200663 kernel: Bridge firewalling registered
Apr 30 12:36:20.200683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:36:20.200703 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:36:20.200724 systemd-journald[251]: Journal started
Apr 30 12:36:20.200760 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2dcd826a369607ba60e936a659a31f) is 8M, max 75.3M, 67.3M free.
Apr 30 12:36:20.143969 systemd-modules-load[252]: Inserted module 'overlay'
Apr 30 12:36:20.187445 systemd-modules-load[252]: Inserted module 'br_netfilter'
Apr 30 12:36:20.216290 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 12:36:20.220910 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 12:36:20.228159 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 12:36:20.232972 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 12:36:20.252137 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:36:20.274472 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 12:36:20.281381 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 12:36:20.289319 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 12:36:20.298641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:36:20.325195 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:36:20.332080 dracut-cmdline[282]: dracut-dracut-053
Apr 30 12:36:20.335590 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:36:20.343949 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=984055eb0c340c9cf0fb51b368030ed72e75b7f2e065edc13766888ef0b42074
Apr 30 12:36:20.364385 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 12:36:20.441531 systemd-resolved[301]: Positive Trust Anchors:
Apr 30 12:36:20.441567 systemd-resolved[301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 12:36:20.441629 systemd-resolved[301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 12:36:20.525089 kernel: SCSI subsystem initialized
Apr 30 12:36:20.535073 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 12:36:20.545089 kernel: iscsi: registered transport (tcp)
Apr 30 12:36:20.568088 kernel: iscsi: registered transport (qla4xxx)
Apr 30 12:36:20.568159 kernel: QLogic iSCSI HBA Driver
Apr 30 12:36:20.664075 kernel: random: crng init done
Apr 30 12:36:20.664451 systemd-resolved[301]: Defaulting to hostname 'linux'.
Apr 30 12:36:20.667884 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 12:36:20.672104 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:36:20.693717 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 12:36:20.707358 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 12:36:20.739119 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 12:36:20.739195 kernel: device-mapper: uevent: version 1.0.3
Apr 30 12:36:20.739223 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 12:36:20.805106 kernel: raid6: neonx8 gen() 6599 MB/s
Apr 30 12:36:20.822079 kernel: raid6: neonx4 gen() 6609 MB/s
Apr 30 12:36:20.839077 kernel: raid6: neonx2 gen() 5493 MB/s
Apr 30 12:36:20.856077 kernel: raid6: neonx1 gen() 3971 MB/s
Apr 30 12:36:20.873077 kernel: raid6: int64x8 gen() 3632 MB/s
Apr 30 12:36:20.890079 kernel: raid6: int64x4 gen() 3716 MB/s
Apr 30 12:36:20.907078 kernel: raid6: int64x2 gen() 3609 MB/s
Apr 30 12:36:20.924911 kernel: raid6: int64x1 gen() 2768 MB/s
Apr 30 12:36:20.924943 kernel: raid6: using algorithm neonx4 gen() 6609 MB/s
Apr 30 12:36:20.942905 kernel: raid6: .... xor() 4905 MB/s, rmw enabled
Apr 30 12:36:20.942942 kernel: raid6: using neon recovery algorithm
Apr 30 12:36:20.950083 kernel: xor: measuring software checksum speed
Apr 30 12:36:20.951081 kernel: 8regs : 11926 MB/sec
Apr 30 12:36:20.953339 kernel: 32regs : 11927 MB/sec
Apr 30 12:36:20.953371 kernel: arm64_neon : 9573 MB/sec
Apr 30 12:36:20.953396 kernel: xor: using function: 32regs (11927 MB/sec)
Apr 30 12:36:21.036091 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 12:36:21.055261 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 12:36:21.064343 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:36:21.112154 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Apr 30 12:36:21.123197 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:36:21.137356 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 12:36:21.170917 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Apr 30 12:36:21.225630 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 12:36:21.235479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 12:36:21.361120 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:36:21.373336 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 12:36:21.419918 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 12:36:21.425391 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 12:36:21.427875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:36:21.430202 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 12:36:21.453347 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 12:36:21.490003 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 12:36:21.567752 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 12:36:21.567817 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 30 12:36:21.592535 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 30 12:36:21.592796 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 30 12:36:21.593035 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 30 12:36:21.593101 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 30 12:36:21.593451 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:3d:d1:2a:cf:e5
Apr 30 12:36:21.599075 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 30 12:36:21.602587 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 12:36:21.604724 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:36:21.606461 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 12:36:21.626939 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 12:36:21.626984 kernel: GPT:9289727 != 16777215
Apr 30 12:36:21.627010 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 12:36:21.627034 kernel: GPT:9289727 != 16777215
Apr 30 12:36:21.627078 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 12:36:21.627106 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 12:36:21.607392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:36:21.608318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:36:21.611138 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:36:21.645335 (udev-worker)[540]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:36:21.649423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:36:21.676956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:36:21.686588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 12:36:21.731211 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:36:21.769339 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (532)
Apr 30 12:36:21.782096 kernel: BTRFS: device fsid 8f86a166-b3d6-49f7-a49d-597eaeb9f5e5 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (531)
Apr 30 12:36:21.888479 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 30 12:36:21.929810 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 30 12:36:21.967971 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 30 12:36:21.983907 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 30 12:36:22.007680 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 12:36:22.021305 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 12:36:22.035120 disk-uuid[663]: Primary Header is updated.
Apr 30 12:36:22.035120 disk-uuid[663]: Secondary Entries is updated.
Apr 30 12:36:22.035120 disk-uuid[663]: Secondary Header is updated.
Apr 30 12:36:22.045082 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 12:36:23.064333 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 12:36:23.065634 disk-uuid[665]: The operation has completed successfully.
Apr 30 12:36:23.254731 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 12:36:23.256634 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 12:36:23.344289 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 12:36:23.352950 sh[925]: Success
Apr 30 12:36:23.377118 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 12:36:23.492627 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 12:36:23.516280 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 12:36:23.521615 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 12:36:23.558153 kernel: BTRFS info (device dm-0): first mount of filesystem 8f86a166-b3d6-49f7-a49d-597eaeb9f5e5
Apr 30 12:36:23.558217 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 12:36:23.558243 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 12:36:23.559880 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 12:36:23.561131 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 12:36:23.665091 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 12:36:23.707680 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 12:36:23.711112 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 12:36:23.727390 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 12:36:23.736338 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 12:36:23.776042 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633
Apr 30 12:36:23.776141 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 12:36:23.777610 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 12:36:23.786091 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 12:36:23.795208 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633
Apr 30 12:36:23.800598 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 12:36:23.813335 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 12:36:23.916595 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 12:36:23.940363 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 12:36:23.993352 systemd-networkd[1115]: lo: Link UP
Apr 30 12:36:23.993374 systemd-networkd[1115]: lo: Gained carrier
Apr 30 12:36:23.998928 systemd-networkd[1115]: Enumeration completed
Apr 30 12:36:24.000505 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 12:36:24.000799 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:36:24.000806 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 12:36:24.011847 systemd[1]: Reached target network.target - Network.
Apr 30 12:36:24.016190 systemd-networkd[1115]: eth0: Link UP
Apr 30 12:36:24.016203 systemd-networkd[1115]: eth0: Gained carrier
Apr 30 12:36:24.016219 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:36:24.035146 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.17.143/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 12:36:24.182988 ignition[1035]: Ignition 2.20.0
Apr 30 12:36:24.183018 ignition[1035]: Stage: fetch-offline
Apr 30 12:36:24.186927 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 12:36:24.183484 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:36:24.183509 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:36:24.183968 ignition[1035]: Ignition finished successfully
Apr 30 12:36:24.213366 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 12:36:24.235446 ignition[1124]: Ignition 2.20.0
Apr 30 12:36:24.235477 ignition[1124]: Stage: fetch
Apr 30 12:36:24.236737 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:36:24.236765 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:36:24.236951 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:36:24.258343 ignition[1124]: PUT result: OK
Apr 30 12:36:24.261221 ignition[1124]: parsed url from cmdline: ""
Apr 30 12:36:24.261238 ignition[1124]: no config URL provided
Apr 30 12:36:24.261256 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 12:36:24.261284 ignition[1124]: no config at "/usr/lib/ignition/user.ign"
Apr 30 12:36:24.261316 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:36:24.263398 ignition[1124]: PUT result: OK
Apr 30 12:36:24.265461 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 30 12:36:24.272786 ignition[1124]: GET result: OK
Apr 30 12:36:24.273075 ignition[1124]: parsing config with SHA512: e03f3e288d91c8fa911a6c37e8d3288a6d0d00ea781b27f82b87b35df8412ce3cc0a0ced038413b1aae028abfe47e51d6b670c141c45a410a380e02a84242e47
Apr 30 12:36:24.281542 unknown[1124]: fetched base config from "system"
Apr 30 12:36:24.282266 ignition[1124]: fetch: fetch complete
Apr 30 12:36:24.281575 unknown[1124]: fetched base config from "system"
Apr 30 12:36:24.282280 ignition[1124]: fetch: fetch passed
Apr 30 12:36:24.281590 unknown[1124]: fetched user config from "aws"
Apr 30 12:36:24.282361 ignition[1124]: Ignition finished successfully
Apr 30 12:36:24.296528 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 12:36:24.307351 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 12:36:24.343464 ignition[1130]: Ignition 2.20.0
Apr 30 12:36:24.343493 ignition[1130]: Stage: kargs
Apr 30 12:36:24.345081 ignition[1130]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:36:24.345137 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:36:24.345335 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:36:24.348574 ignition[1130]: PUT result: OK
Apr 30 12:36:24.358106 ignition[1130]: kargs: kargs passed
Apr 30 12:36:24.358241 ignition[1130]: Ignition finished successfully
Apr 30 12:36:24.363301 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 12:36:24.376312 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 12:36:24.400020 ignition[1136]: Ignition 2.20.0
Apr 30 12:36:24.400074 ignition[1136]: Stage: disks
Apr 30 12:36:24.401710 ignition[1136]: no configs at "/usr/lib/ignition/base.d"
Apr 30 12:36:24.401739 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:36:24.402596 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:36:24.407141 ignition[1136]: PUT result: OK
Apr 30 12:36:24.414697 ignition[1136]: disks: disks passed
Apr 30 12:36:24.414843 ignition[1136]: Ignition finished successfully
Apr 30 12:36:24.417646 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 12:36:24.424648 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 12:36:24.426824 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 12:36:24.429133 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 12:36:24.431387 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 12:36:24.434090 systemd[1]: Reached target basic.target - Basic System.
Apr 30 12:36:24.458373 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 12:36:24.498228 systemd-fsck[1144]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 12:36:24.503860 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 12:36:24.521404 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 12:36:24.606085 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 597557b0-8ae6-4a5a-8e98-f3f884fcfe65 r/w with ordered data mode. Quota mode: none.
Apr 30 12:36:24.606848 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 12:36:24.610808 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 12:36:24.628202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 12:36:24.633249 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 12:36:24.644584 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 12:36:24.647571 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 12:36:24.647622 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 12:36:24.659951 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 12:36:24.668396 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 12:36:24.684112 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1163)
Apr 30 12:36:24.689040 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633
Apr 30 12:36:24.689122 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 12:36:24.689149 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 12:36:24.702087 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 12:36:24.705258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 12:36:25.045317 initrd-setup-root[1187]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 12:36:25.069551 initrd-setup-root[1194]: cut: /sysroot/etc/group: No such file or directory
Apr 30 12:36:25.078075 initrd-setup-root[1201]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 12:36:25.100487 initrd-setup-root[1208]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 12:36:25.395341 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 12:36:25.408195 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 12:36:25.414348 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 12:36:25.429154 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 12:36:25.433037 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633
Apr 30 12:36:25.480127 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 12:36:25.486089 ignition[1275]: INFO : Ignition 2.20.0
Apr 30 12:36:25.486089 ignition[1275]: INFO : Stage: mount
Apr 30 12:36:25.486089 ignition[1275]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:36:25.486089 ignition[1275]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:36:25.493919 ignition[1275]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:36:25.493919 ignition[1275]: INFO : PUT result: OK
Apr 30 12:36:25.499972 ignition[1275]: INFO : mount: mount passed
Apr 30 12:36:25.501559 ignition[1275]: INFO : Ignition finished successfully
Apr 30 12:36:25.508183 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 12:36:25.525262 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 12:36:25.536171 systemd-networkd[1115]: eth0: Gained IPv6LL
Apr 30 12:36:25.614500 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 12:36:25.642460 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1287)
Apr 30 12:36:25.642533 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8d8cccbd-965f-4336-afa9-06a510e76633
Apr 30 12:36:25.642560 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 12:36:25.645144 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 12:36:25.651215 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 12:36:25.653740 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 12:36:25.688817 ignition[1304]: INFO : Ignition 2.20.0
Apr 30 12:36:25.688817 ignition[1304]: INFO : Stage: files
Apr 30 12:36:25.692320 ignition[1304]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:36:25.692320 ignition[1304]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:36:25.696502 ignition[1304]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:36:25.699210 ignition[1304]: INFO : PUT result: OK
Apr 30 12:36:25.703706 ignition[1304]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 12:36:25.718638 ignition[1304]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 12:36:25.718638 ignition[1304]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 12:36:25.761780 ignition[1304]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 12:36:25.764423 ignition[1304]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 12:36:25.767570 unknown[1304]: wrote ssh authorized keys file for user: core
Apr 30 12:36:25.771264 ignition[1304]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 12:36:25.790093 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 12:36:25.793859 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Apr 30 12:36:25.889309 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 12:36:26.078099 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 12:36:26.078099 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:36:26.085028 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 30 12:36:26.548511 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 12:36:26.663604 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 12:36:26.663604 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 12:36:26.670450 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 12:36:26.673694 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:36:26.677573 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 12:36:26.677573 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:36:26.677573 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 12:36:26.677573 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:36:26.691769 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 12:36:26.691769 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:36:26.691769 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 12:36:26.691769 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 12:36:26.691769 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 12:36:26.691769 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 12:36:26.691769 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Apr 30 12:36:26.993818 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 12:36:27.337980 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 12:36:27.337980 ignition[1304]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 12:36:27.344422 ignition[1304]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:36:27.344422 ignition[1304]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 12:36:27.344422 ignition[1304]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 12:36:27.344422 ignition[1304]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 12:36:27.344422 ignition[1304]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 12:36:27.344422 ignition[1304]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:36:27.344422 ignition[1304]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 12:36:27.344422 ignition[1304]: INFO : files: files passed
Apr 30 12:36:27.344422 ignition[1304]: INFO : Ignition finished successfully
Apr 30 12:36:27.370992 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 12:36:27.384434 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 12:36:27.397364 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 12:36:27.402366 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 12:36:27.404193 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 12:36:27.422227 initrd-setup-root-after-ignition[1333]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:36:27.422227 initrd-setup-root-after-ignition[1333]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:36:27.430339 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 12:36:27.435978 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:36:27.441905 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 12:36:27.460391 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 12:36:27.504374 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 12:36:27.505579 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 12:36:27.512913 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 12:36:27.517377 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 12:36:27.521367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 12:36:27.534380 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 12:36:27.563585 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:36:27.579880 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 12:36:27.603331 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 12:36:27.608485 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:36:27.613171 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 12:36:27.615077 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 12:36:27.615311 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 12:36:27.623330 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 12:36:27.625396 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 12:36:27.627321 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 12:36:27.633846 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 12:36:27.636404 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 12:36:27.644512 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 12:36:27.646666 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 12:36:27.649683 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 12:36:27.657585 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 12:36:27.659778 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 12:36:27.661801 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 12:36:27.662033 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 12:36:27.670827 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:36:27.673033 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:36:27.675455 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 12:36:27.681277 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:36:27.687210 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 12:36:27.687442 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 12:36:27.689901 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 12:36:27.690150 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 12:36:27.692777 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 12:36:27.692973 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 12:36:27.714259 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 12:36:27.716669 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 12:36:27.717180 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:36:27.735489 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 12:36:27.740037 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 12:36:27.741665 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:36:27.748151 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 12:36:27.749924 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 12:36:27.773345 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 12:36:27.775222 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 12:36:27.783416 ignition[1357]: INFO : Ignition 2.20.0
Apr 30 12:36:27.783416 ignition[1357]: INFO : Stage: umount
Apr 30 12:36:27.786833 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 12:36:27.786833 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 12:36:27.791150 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 12:36:27.794071 ignition[1357]: INFO : PUT result: OK
Apr 30 12:36:27.800452 ignition[1357]: INFO : umount: umount passed
Apr 30 12:36:27.800452 ignition[1357]: INFO : Ignition finished successfully
Apr 30 12:36:27.805890 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 12:36:27.807675 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 12:36:27.815198 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 12:36:27.818234 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 12:36:27.818415 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 12:36:27.822465 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 12:36:27.824688 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 12:36:27.829139 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 12:36:27.829261 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 12:36:27.831263 systemd[1]: Stopped target network.target - Network.
Apr 30 12:36:27.832963 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 12:36:27.833067 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 12:36:27.835391 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 12:36:27.837108 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 12:36:27.851968 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:36:27.854864 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 12:36:27.860612 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 12:36:27.862531 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 12:36:27.862615 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 12:36:27.864568 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 12:36:27.864635 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 12:36:27.866606 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 12:36:27.866694 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 12:36:27.868651 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 12:36:27.868732 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 12:36:27.871242 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 12:36:27.878241 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 12:36:27.898023 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 12:36:27.899325 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 12:36:27.903268 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 12:36:27.903463 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 12:36:27.915922 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 30 12:36:27.918684 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 12:36:27.920677 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 12:36:27.925856 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 30 12:36:27.927757 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 12:36:27.927885 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:36:27.931359 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 12:36:27.932958 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 12:36:27.948114 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 12:36:27.948817 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 12:36:27.949492 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 12:36:27.950509 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 12:36:27.950607 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:36:27.955575 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 12:36:27.955752 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:36:27.959581 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 12:36:27.959667 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:36:27.962325 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:36:27.982322 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 30 12:36:27.982458 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:36:28.007915 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 12:36:28.008259 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:36:28.014801 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 12:36:28.014915 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:36:28.019756 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 12:36:28.019829 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:36:28.022062 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 12:36:28.022160 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 12:36:28.024737 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 12:36:28.024821 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 12:36:28.041737 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 12:36:28.041844 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 12:36:28.055321 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 12:36:28.058197 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 12:36:28.058316 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:36:28.061216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 12:36:28.061300 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:36:28.076677 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 30 12:36:28.076967 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 30 12:36:28.087868 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 12:36:28.088101 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 12:36:28.105499 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 12:36:28.107099 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 12:36:28.115571 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 12:36:28.127767 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 12:36:28.144351 systemd[1]: Switching root.
Apr 30 12:36:28.186082 systemd-journald[251]: Journal stopped
Apr 30 12:36:30.671475 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Apr 30 12:36:30.671606 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 12:36:30.671649 kernel: SELinux: policy capability open_perms=1
Apr 30 12:36:30.671680 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 12:36:30.671709 kernel: SELinux: policy capability always_check_network=0
Apr 30 12:36:30.671737 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 12:36:30.671767 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 12:36:30.671801 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 12:36:30.671840 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 12:36:30.671869 kernel: audit: type=1403 audit(1746016588.677:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 12:36:30.671908 systemd[1]: Successfully loaded SELinux policy in 72.001ms.
Apr 30 12:36:30.672257 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.546ms.
Apr 30 12:36:30.672299 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 30 12:36:30.672330 systemd[1]: Detected virtualization amazon.
Apr 30 12:36:30.672360 systemd[1]: Detected architecture arm64.
Apr 30 12:36:30.672392 systemd[1]: Detected first boot.
Apr 30 12:36:30.672427 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 12:36:30.672467 zram_generator::config[1401]: No configuration found.
Apr 30 12:36:30.672498 kernel: NET: Registered PF_VSOCK protocol family
Apr 30 12:36:30.672529 systemd[1]: Populated /etc with preset unit settings.
Apr 30 12:36:30.672561 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 30 12:36:30.672590 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 12:36:30.672618 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 12:36:30.672650 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:36:30.672685 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 12:36:30.672717 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 12:36:30.672747 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 12:36:30.672778 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 12:36:30.672807 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 12:36:30.672837 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 12:36:30.672866 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 12:36:30.672897 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 12:36:30.672932 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 12:36:30.672961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 12:36:30.673001 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 12:36:30.673031 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 12:36:30.673083 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 12:36:30.673118 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 12:36:30.673168 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 12:36:30.673204 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 12:36:30.673278 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 12:36:30.673319 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 12:36:30.673351 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 12:36:30.673382 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 12:36:30.673412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 12:36:30.673599 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 12:36:30.673636 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 12:36:30.673669 systemd[1]: Reached target swap.target - Swaps.
Apr 30 12:36:30.673698 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 12:36:30.673734 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 12:36:30.673763 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 30 12:36:30.673791 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 12:36:30.673821 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 12:36:30.673850 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 12:36:30.673878 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 12:36:30.673910 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 12:36:30.674146 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 12:36:30.674179 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 12:36:30.674216 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 12:36:30.674245 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 12:36:30.674435 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 12:36:30.674474 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 12:36:30.674515 systemd[1]: Reached target machines.target - Containers.
Apr 30 12:36:30.674545 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 12:36:30.677188 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:36:30.677439 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 12:36:30.677472 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 12:36:30.677511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:36:30.677544 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:36:30.677574 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:36:30.677605 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 12:36:30.677634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:36:30.677663 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 12:36:30.677694 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 12:36:30.677723 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 12:36:30.677757 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 12:36:30.677790 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 12:36:30.677820 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:36:30.677850 kernel: fuse: init (API version 7.39)
Apr 30 12:36:30.677879 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 12:36:30.677908 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 12:36:30.677937 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 12:36:30.677968 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 12:36:30.677995 kernel: loop: module loaded
Apr 30 12:36:30.678028 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 30 12:36:30.678523 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 12:36:30.678567 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 12:36:30.678599 systemd[1]: Stopped verity-setup.service.
Apr 30 12:36:30.678628 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 12:36:30.678662 kernel: ACPI: bus type drm_connector registered
Apr 30 12:36:30.678693 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 12:36:30.678724 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 12:36:30.678752 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 12:36:30.678781 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 12:36:30.678810 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 12:36:30.678840 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 12:36:30.678868 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 12:36:30.678902 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 12:36:30.678936 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:36:30.678967 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:36:30.678996 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:36:30.679093 systemd-journald[1491]: Collecting audit messages is disabled.
Apr 30 12:36:30.679151 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 12:36:30.679181 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:36:30.679209 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:36:30.679238 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 12:36:30.679265 systemd-journald[1491]: Journal started
Apr 30 12:36:30.679313 systemd-journald[1491]: Runtime Journal (/run/log/journal/ec2dcd826a369607ba60e936a659a31f) is 8M, max 75.3M, 67.3M free.
Apr 30 12:36:30.102410 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 12:36:30.113411 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 30 12:36:30.114260 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 12:36:30.683023 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 12:36:30.694848 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 12:36:30.700198 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 12:36:30.703472 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:36:30.703886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:36:30.706914 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 12:36:30.710969 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 12:36:30.726379 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 12:36:30.729889 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 30 12:36:30.746624 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 12:36:30.756264 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 12:36:30.768342 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 12:36:30.771732 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 12:36:30.771791 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 12:36:30.777933 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 30 12:36:30.788369 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 12:36:30.800337 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 12:36:30.802506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:36:30.812480 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 12:36:30.821359 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 12:36:30.823715 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 12:36:30.825810 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 12:36:30.827977 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 12:36:30.831408 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 12:36:30.839376 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 12:36:30.847381 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 12:36:30.853976 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 12:36:30.856739 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 12:36:30.862186 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 12:36:30.894223 systemd-journald[1491]: Time spent on flushing to /var/log/journal/ec2dcd826a369607ba60e936a659a31f is 107.467ms for 920 entries.
Apr 30 12:36:30.894223 systemd-journald[1491]: System Journal (/var/log/journal/ec2dcd826a369607ba60e936a659a31f) is 8M, max 195.6M, 187.6M free.
Apr 30 12:36:31.029370 systemd-journald[1491]: Received client request to flush runtime journal.
Apr 30 12:36:31.029446 kernel: loop0: detected capacity change from 0 to 194096
Apr 30 12:36:30.956090 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 12:36:30.959285 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 12:36:30.974456 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 30 12:36:30.991099 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 12:36:31.000734 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 12:36:31.022698 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 12:36:31.034214 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 12:36:31.038807 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 12:36:31.069832 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 30 12:36:31.093111 kernel: loop1: detected capacity change from 0 to 53784
Apr 30 12:36:31.117614 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 12:36:31.137590 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 12:36:31.150199 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 12:36:31.159196 systemd-tmpfiles[1545]: ACLs are not supported, ignoring.
Apr 30 12:36:31.159233 systemd-tmpfiles[1545]: ACLs are not supported, ignoring.
Apr 30 12:36:31.174411 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 12:36:31.198411 udevadm[1556]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 12:36:31.250439 kernel: loop2: detected capacity change from 0 to 123192
Apr 30 12:36:31.407176 kernel: loop3: detected capacity change from 0 to 113512
Apr 30 12:36:31.529479 kernel: loop4: detected capacity change from 0 to 194096
Apr 30 12:36:31.567771 kernel: loop5: detected capacity change from 0 to 53784
Apr 30 12:36:31.585654 kernel: loop6: detected capacity change from 0 to 123192
Apr 30 12:36:31.602099 kernel: loop7: detected capacity change from 0 to 113512
Apr 30 12:36:31.613095 (sd-merge)[1561]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 30 12:36:31.614169 (sd-merge)[1561]: Merged extensions into '/usr'.
Apr 30 12:36:31.628789 systemd[1]: Reload requested from client PID 1536 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 12:36:31.628820 systemd[1]: Reloading...
Apr 30 12:36:31.786092 zram_generator::config[1589]: No configuration found.
Apr 30 12:36:32.096146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:36:32.246130 systemd[1]: Reloading finished in 616 ms.
Apr 30 12:36:32.272413 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 12:36:32.275708 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 12:36:32.290446 systemd[1]: Starting ensure-sysext.service...
Apr 30 12:36:32.296426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 12:36:32.306499 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 12:36:32.324084 ldconfig[1531]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 12:36:32.330842 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 12:36:32.347062 systemd[1]: Reload requested from client PID 1641 ('systemctl') (unit ensure-sysext.service)...
Apr 30 12:36:32.347092 systemd[1]: Reloading...
Apr 30 12:36:32.366518 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 12:36:32.367876 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 12:36:32.370627 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 12:36:32.371769 systemd-tmpfiles[1642]: ACLs are not supported, ignoring.
Apr 30 12:36:32.372194 systemd-tmpfiles[1642]: ACLs are not supported, ignoring.
Apr 30 12:36:32.383774 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 12:36:32.384192 systemd-tmpfiles[1642]: Skipping /boot
Apr 30 12:36:32.418472 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 12:36:32.418685 systemd-tmpfiles[1642]: Skipping /boot
Apr 30 12:36:32.482579 systemd-udevd[1643]: Using default interface naming scheme 'v255'.
Apr 30 12:36:32.524092 zram_generator::config[1675]: No configuration found.
Apr 30 12:36:32.669257 (udev-worker)[1684]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:36:32.905080 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1689)
Apr 30 12:36:32.989882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 12:36:33.241588 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 12:36:33.242580 systemd[1]: Reloading finished in 894 ms.
Apr 30 12:36:33.263603 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 12:36:33.288115 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 12:36:33.317630 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 12:36:33.334678 systemd[1]: Finished ensure-sysext.service.
Apr 30 12:36:33.386401 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 12:36:33.404373 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 12:36:33.414374 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 12:36:33.416913 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 12:36:33.429932 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 12:36:33.436292 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 12:36:33.440344 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 12:36:33.444211 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 12:36:33.449517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 12:36:33.451743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 12:36:33.454374 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 12:36:33.456648 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 30 12:36:33.462379 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 12:36:33.471401 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 12:36:33.482386 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 12:36:33.484474 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 12:36:33.488636 lvm[1843]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 12:36:33.490573 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 12:36:33.499400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 12:36:33.549619 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 12:36:33.553604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 12:36:33.554021 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 12:36:33.576089 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 12:36:33.576544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 12:36:33.578891 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 12:36:33.590842 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 12:36:33.600087 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 12:36:33.602854 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 12:36:33.609392 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 12:36:33.626419 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 12:36:33.643039 lvm[1869]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 12:36:33.660786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 12:36:33.662178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 12:36:33.665735 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 12:36:33.666171 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 12:36:33.669497 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 12:36:33.703777 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 12:36:33.719435 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 12:36:33.721990 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 12:36:33.737025 augenrules[1887]: No rules
Apr 30 12:36:33.740646 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 12:36:33.741524 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 12:36:33.773649 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 12:36:33.777152 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 12:36:33.782811 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 12:36:33.809884 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 12:36:33.831940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 12:36:33.935064 systemd-networkd[1855]: lo: Link UP
Apr 30 12:36:33.935084 systemd-networkd[1855]: lo: Gained carrier
Apr 30 12:36:33.938279 systemd-networkd[1855]: Enumeration completed
Apr 30 12:36:33.938493 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 12:36:33.941010 systemd-networkd[1855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:36:33.941019 systemd-networkd[1855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 12:36:33.943287 systemd-networkd[1855]: eth0: Link UP
Apr 30 12:36:33.943571 systemd-networkd[1855]: eth0: Gained carrier
Apr 30 12:36:33.943605 systemd-networkd[1855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 12:36:33.950352 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 30 12:36:33.954346 systemd-networkd[1855]: eth0: DHCPv4 address 172.31.17.143/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 12:36:33.957591 systemd-resolved[1856]: Positive Trust Anchors: Apr 30 12:36:33.957629 systemd-resolved[1856]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 12:36:33.957691 systemd-resolved[1856]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 12:36:33.961471 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 12:36:33.976602 systemd-resolved[1856]: Defaulting to hostname 'linux'. Apr 30 12:36:33.982120 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 12:36:33.984438 systemd[1]: Reached target network.target - Network. Apr 30 12:36:33.991448 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 12:36:33.994229 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 12:36:33.996431 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 12:36:33.998887 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 12:36:34.001587 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 12:36:34.003889 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
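The DHCPv4 lease and the resolved negative trust anchors logged above are internally consistent, which a few lines of Python can confirm (an illustrative check only, using the stdlib `ipaddress` module — not part of the boot process):

```python
# Consistency check of the lease and trust-anchor entries logged above.
import ipaddress

iface = ipaddress.ip_interface("172.31.17.143/20")
gateway = ipaddress.ip_address("172.31.16.1")

# The /20 prefix places the address in 172.31.16.0/20, with the gateway inside it.
assert iface.network == ipaddress.ip_network("172.31.16.0/20")
assert gateway in iface.network

# RFC 1918 reverse zones such as 31.172.in-addr.arpa appear in the negative
# trust anchor list, and this address's PTR name falls under one of them,
# so resolved will not DNSSEC-validate its reverse lookup.
assert iface.ip.reverse_pointer == "143.17.31.172.in-addr.arpa"
```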
Apr 30 12:36:34.006304 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 12:36:34.008713 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 12:36:34.008775 systemd[1]: Reached target paths.target - Path Units. Apr 30 12:36:34.010630 systemd[1]: Reached target timers.target - Timer Units. Apr 30 12:36:34.014128 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 12:36:34.019167 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 12:36:34.025856 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 30 12:36:34.028888 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 30 12:36:34.031519 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 30 12:36:34.043270 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 12:36:34.046116 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 30 12:36:34.049809 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 30 12:36:34.053461 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 12:36:34.056502 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 12:36:34.058449 systemd[1]: Reached target basic.target - Basic System. Apr 30 12:36:34.060386 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:36:34.060438 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 12:36:34.069188 systemd[1]: Starting containerd.service - containerd container runtime... 
Apr 30 12:36:34.076712 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 12:36:34.081428 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 12:36:34.092286 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 12:36:34.097572 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 12:36:34.101278 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 12:36:34.105405 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 12:36:34.119573 systemd[1]: Started ntpd.service - Network Time Service. Apr 30 12:36:34.131706 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 12:36:34.137879 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 30 12:36:34.150454 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 12:36:34.162398 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 12:36:34.177504 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 12:36:34.183124 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 12:36:34.184003 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 12:36:34.187137 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 12:36:34.193819 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Apr 30 12:36:34.198237 extend-filesystems[1915]: Found loop4 Apr 30 12:36:34.198237 extend-filesystems[1915]: Found loop5 Apr 30 12:36:34.198237 extend-filesystems[1915]: Found loop6 Apr 30 12:36:34.198237 extend-filesystems[1915]: Found loop7 Apr 30 12:36:34.198237 extend-filesystems[1915]: Found nvme0n1 Apr 30 12:36:34.198237 extend-filesystems[1915]: Found nvme0n1p1 Apr 30 12:36:34.198237 extend-filesystems[1915]: Found nvme0n1p2 Apr 30 12:36:34.198237 extend-filesystems[1915]: Found nvme0n1p3 Apr 30 12:36:34.198237 extend-filesystems[1915]: Found usr Apr 30 12:36:34.198237 extend-filesystems[1915]: Found nvme0n1p4 Apr 30 12:36:34.260090 extend-filesystems[1915]: Found nvme0n1p6 Apr 30 12:36:34.260090 extend-filesystems[1915]: Found nvme0n1p7 Apr 30 12:36:34.260090 extend-filesystems[1915]: Found nvme0n1p9 Apr 30 12:36:34.260090 extend-filesystems[1915]: Checking size of /dev/nvme0n1p9 Apr 30 12:36:34.281299 jq[1914]: false Apr 30 12:36:34.215488 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 12:36:34.220158 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 12:36:34.283554 dbus-daemon[1913]: [system] SELinux support is enabled Apr 30 12:36:34.284803 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 12:36:34.294204 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 12:36:34.294250 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 12:36:34.296774 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Apr 30 12:36:34.296808 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 12:36:34.313027 dbus-daemon[1913]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1855 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 30 12:36:34.335573 jq[1927]: true Apr 30 12:36:34.333005 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 12:36:34.333518 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 12:36:34.336219 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 12:36:34.336657 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 12:36:34.354516 extend-filesystems[1915]: Resized partition /dev/nvme0n1p9 Apr 30 12:36:34.356914 (ntainerd)[1942]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 12:36:34.382305 extend-filesystems[1956]: resize2fs 1.47.1 (20-May-2024) Apr 30 12:36:34.389326 tar[1940]: linux-arm64/helm Apr 30 12:36:34.372969 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 30 12:36:34.396085 jq[1950]: true Apr 30 12:36:34.406719 update_engine[1926]: I20250430 12:36:34.402949 1926 main.cc:92] Flatcar Update Engine starting Apr 30 12:36:34.412778 systemd[1]: Started update-engine.service - Update Engine. Apr 30 12:36:34.417604 update_engine[1926]: I20250430 12:36:34.417346 1926 update_check_scheduler.cc:74] Next update check in 6m10s Apr 30 12:36:34.421792 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 12:36:34.434686 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Apr 30 12:36:34.479777 systemd[1]: Finished setup-oem.service - Setup OEM. 
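The EXT4 online resize announced above grows /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB; the arithmetic works out to roughly 2.1 GiB growing to 5.7 GiB:

```python
# Size arithmetic for the nvme0n1p9 resize logged above (4 KiB blocks).
BLOCK = 4096
old_bytes = 553472 * BLOCK   # size before the resize
new_bytes = 1489915 * BLOCK  # size after the resize

old_gib = old_bytes / 2**30
new_gib = new_bytes / 2**30
assert round(old_gib, 2) == 2.11
assert round(new_gib, 2) == 5.68
```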
Apr 30 12:36:34.502821 ntpd[1917]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 21:38:45 UTC 2025 (1): Starting
Apr 30 12:36:34.504902 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 21:38:45 UTC 2025 (1): Starting
Apr 30 12:36:34.504902 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 12:36:34.504902 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: ----------------------------------------------------
Apr 30 12:36:34.504902 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: ntp-4 is maintained by Network Time Foundation,
Apr 30 12:36:34.504902 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 12:36:34.504902 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: corporation. Support and training for ntp-4 are
Apr 30 12:36:34.504902 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: available at https://www.nwtime.org/support
Apr 30 12:36:34.504902 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: ----------------------------------------------------
Apr 30 12:36:34.502876 ntpd[1917]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 12:36:34.502896 ntpd[1917]: ----------------------------------------------------
Apr 30 12:36:34.502914 ntpd[1917]: ntp-4 is maintained by Network Time Foundation,
Apr 30 12:36:34.502931 ntpd[1917]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 12:36:34.502949 ntpd[1917]: corporation. Support and training for ntp-4 are
Apr 30 12:36:34.502966 ntpd[1917]: available at https://www.nwtime.org/support
Apr 30 12:36:34.502983 ntpd[1917]: ----------------------------------------------------
Apr 30 12:36:34.514342 ntpd[1917]: proto: precision = 0.096 usec (-23)
Apr 30 12:36:34.517122 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: proto: precision = 0.096 usec (-23)
Apr 30 12:36:34.519504 ntpd[1917]: basedate set to 2025-04-17
Apr 30 12:36:34.521971 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: basedate set to 2025-04-17
Apr 30 12:36:34.521971 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: gps base set to 2025-04-20 (week 2363)
Apr 30 12:36:34.519562 ntpd[1917]: gps base set to 2025-04-20 (week 2363)
Apr 30 12:36:34.539578 ntpd[1917]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 12:36:34.541196 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 12:36:34.541196 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 12:36:34.541196 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 12:36:34.541196 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: Listen normally on 3 eth0 172.31.17.143:123
Apr 30 12:36:34.541196 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: Listen normally on 4 lo [::1]:123
Apr 30 12:36:34.541196 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: bind(21) AF_INET6 fe80::43d:d1ff:fe2a:cfe5%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 12:36:34.541196 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: unable to create socket on eth0 (5) for fe80::43d:d1ff:fe2a:cfe5%2#123
Apr 30 12:36:34.541196 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: failed to init interface for address fe80::43d:d1ff:fe2a:cfe5%2
Apr 30 12:36:34.541196 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: Listening on routing socket on fd #21 for interface updates
Apr 30 12:36:34.539878 ntpd[1917]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 12:36:34.540167 ntpd[1917]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 12:36:34.540229 ntpd[1917]: Listen normally on 3 eth0 172.31.17.143:123
Apr 30 12:36:34.540295 ntpd[1917]: Listen normally on 4 lo [::1]:123
Apr 30 12:36:34.540367 ntpd[1917]: bind(21) AF_INET6 fe80::43d:d1ff:fe2a:cfe5%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 12:36:34.540405 ntpd[1917]: unable to create socket on eth0 (5) for fe80::43d:d1ff:fe2a:cfe5%2#123
Apr 30 12:36:34.540431 ntpd[1917]: failed to init interface for address fe80::43d:d1ff:fe2a:cfe5%2
Apr 30 12:36:34.540482 ntpd[1917]: Listening on routing socket on fd #21 for interface updates
Apr 30 12:36:34.545710 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:36:34.546264 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:36:34.546376 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:36:34.546486 ntpd[1917]: 30 Apr 12:36:34 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 12:36:34.583099 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Apr 30 12:36:34.625071 extend-filesystems[1956]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 30 12:36:34.625071 extend-filesystems[1956]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 30 12:36:34.625071 extend-filesystems[1956]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Apr 30 12:36:34.615676 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 12:36:34.636955 extend-filesystems[1915]: Resized filesystem in /dev/nvme0n1p9
Apr 30 12:36:34.616108 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 12:36:34.675953 bash[1990]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 12:36:34.680500 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 12:36:34.694076 systemd[1]: Starting sshkeys.service...
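Two of the ntpd startup figures above can be sanity-checked: the (-23) after the reported precision is the base-2 logarithm of the measured 0.096 µs clock granularity, and GPS week 2363 counted from the GPS epoch (1980-01-06) does land on the logged gps base date of 2025-04-20:

```python
# Sanity checks on the ntpd precision exponent and GPS base week logged above.
import math
from datetime import date, timedelta

# "proto: precision = 0.096 usec (-23)": the exponent is log2 of the
# clock-reading granularity expressed in seconds.
assert round(math.log2(0.096e-6)) == -23

# "gps base set to 2025-04-20 (week 2363)": whole weeks since the GPS epoch.
gps_epoch = date(1980, 1, 6)
assert gps_epoch + timedelta(weeks=2363) == date(2025, 4, 20)
```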
Apr 30 12:36:34.731981 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 12:36:34.778269 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 12:36:34.808701 coreos-metadata[1912]: Apr 30 12:36:34.808 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 30 12:36:34.873724 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1700)
Apr 30 12:36:34.870780 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.816 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.818 INFO Fetch successful
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.818 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.833 INFO Fetch successful
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.833 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.834 INFO Fetch successful
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.834 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.839 INFO Fetch successful
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.845 INFO Fetch failed with 404: resource not found
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.845 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.846 INFO Fetch successful
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.846 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.846 INFO Fetch successful
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.846 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.848 INFO Fetch successful
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.848 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.857 INFO Fetch successful
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.857 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 30 12:36:34.873873 coreos-metadata[1912]: Apr 30 12:36:34.862 INFO Fetch successful
Apr 30 12:36:34.911700 systemd-logind[1925]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 30 12:36:34.911752 systemd-logind[1925]: Watching system buttons on /dev/input/event1 (Sleep Button)
Apr 30 12:36:34.919897 systemd-logind[1925]: New seat seat0.
Apr 30 12:36:34.932344 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 12:36:35.112502 containerd[1942]: time="2025-04-30T12:36:35.110502609Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 30 12:36:35.126575 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 12:36:35.129755 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 12:36:35.231758 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
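The coreos-metadata entries above follow the EC2 IMDSv2 pattern: PUT to the token endpoint first, then GET individual metadata paths with the token attached, treating a 404 as "resource not present" (as with /meta-data/ipv6 here). A minimal sketch of that pattern, with hypothetical helper names (this is not the agent's actual code):

```python
# Illustrative sketch of the IMDSv2 fetch pattern seen in the log above.
# Assumption: helper names (metadata_url, fetch) are invented for this sketch.
import urllib.error
import urllib.request

IMDS = "http://169.254.169.254"

def metadata_url(path: str) -> str:
    """Build a 2021-01-03 metadata URL like the ones in the log."""
    return f"{IMDS}/2021-01-03/meta-data/{path}"

def fetch(path: str, token: str):
    """GET one metadata path; return None on 404 (e.g. no IPv6 assigned)."""
    req = urllib.request.Request(
        metadata_url(path),
        headers={"X-aws-ec2-metadata-token": token},  # token from a prior PUT
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return None  # the log reports this as "Fetch failed with 404"
        raise
```

Only the URL construction is exercised here; the network calls require the link-local metadata endpoint of an actual instance.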
Apr 30 12:36:35.243275 dbus-daemon[1913]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 30 12:36:35.245763 dbus-daemon[1913]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1957 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 30 12:36:35.259342 coreos-metadata[2004]: Apr 30 12:36:35.259 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 12:36:35.261325 systemd[1]: Starting polkit.service - Authorization Manager... Apr 30 12:36:35.267995 coreos-metadata[2004]: Apr 30 12:36:35.266 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 30 12:36:35.267995 coreos-metadata[2004]: Apr 30 12:36:35.267 INFO Fetch successful Apr 30 12:36:35.267995 coreos-metadata[2004]: Apr 30 12:36:35.267 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 30 12:36:35.270309 coreos-metadata[2004]: Apr 30 12:36:35.270 INFO Fetch successful Apr 30 12:36:35.276234 unknown[2004]: wrote ssh authorized keys file for user: core Apr 30 12:36:35.295364 locksmithd[1962]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 12:36:35.350683 polkitd[2092]: Started polkitd version 121 Apr 30 12:36:35.366132 containerd[1942]: time="2025-04-30T12:36:35.364621102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:36:35.382695 containerd[1942]: time="2025-04-30T12:36:35.382405834Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:36:35.382695 containerd[1942]: time="2025-04-30T12:36:35.382471810Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 12:36:35.382695 containerd[1942]: time="2025-04-30T12:36:35.382508134Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 12:36:35.384032 containerd[1942]: time="2025-04-30T12:36:35.383545294Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 12:36:35.384032 containerd[1942]: time="2025-04-30T12:36:35.383606134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 12:36:35.384032 containerd[1942]: time="2025-04-30T12:36:35.383729938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:36:35.384032 containerd[1942]: time="2025-04-30T12:36:35.383759698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:36:35.386492 update-ssh-keys[2094]: Updated "/home/core/.ssh/authorized_keys" Apr 30 12:36:35.390609 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 12:36:35.397718 containerd[1942]: time="2025-04-30T12:36:35.395793478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:36:35.397718 containerd[1942]: time="2025-04-30T12:36:35.395838922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 12:36:35.397718 containerd[1942]: time="2025-04-30T12:36:35.395872018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:36:35.397718 containerd[1942]: time="2025-04-30T12:36:35.395895322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 12:36:35.397718 containerd[1942]: time="2025-04-30T12:36:35.396127186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:36:35.397718 containerd[1942]: time="2025-04-30T12:36:35.396519754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 12:36:35.397718 containerd[1942]: time="2025-04-30T12:36:35.396774022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 12:36:35.397718 containerd[1942]: time="2025-04-30T12:36:35.396802462Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 12:36:35.397718 containerd[1942]: time="2025-04-30T12:36:35.396987598Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 30 12:36:35.401446 containerd[1942]: time="2025-04-30T12:36:35.399723730Z" level=info msg="metadata content store policy set" policy=shared Apr 30 12:36:35.402240 systemd[1]: Finished sshkeys.service. Apr 30 12:36:35.415256 containerd[1942]: time="2025-04-30T12:36:35.415198427Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 12:36:35.421276 containerd[1942]: time="2025-04-30T12:36:35.421215635Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 12:36:35.421480 containerd[1942]: time="2025-04-30T12:36:35.421452791Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 12:36:35.424452 containerd[1942]: time="2025-04-30T12:36:35.424403603Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 12:36:35.428185 containerd[1942]: time="2025-04-30T12:36:35.427267151Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 12:36:35.430100 containerd[1942]: time="2025-04-30T12:36:35.429548771Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 12:36:35.430378 containerd[1942]: time="2025-04-30T12:36:35.429990323Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 12:36:35.431353 containerd[1942]: time="2025-04-30T12:36:35.431285039Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 12:36:35.431726 containerd[1942]: time="2025-04-30T12:36:35.431679503Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 30 12:36:35.431940 containerd[1942]: time="2025-04-30T12:36:35.431910299Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433107899Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433237235Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433276823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433317851Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433355051Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433391063Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433423223Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433451951Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433503527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433535891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433564955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433613855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433648511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 12:36:35.434077 containerd[1942]: time="2025-04-30T12:36:35.433678247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 12:36:35.434720 containerd[1942]: time="2025-04-30T12:36:35.433705571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 12:36:35.434720 containerd[1942]: time="2025-04-30T12:36:35.433739159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 12:36:35.434720 containerd[1942]: time="2025-04-30T12:36:35.433770431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 12:36:35.434720 containerd[1942]: time="2025-04-30T12:36:35.433805099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 12:36:35.434720 containerd[1942]: time="2025-04-30T12:36:35.433833635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 12:36:35.434720 containerd[1942]: time="2025-04-30T12:36:35.433860899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1
Apr 30 12:36:35.434720 containerd[1942]: time="2025-04-30T12:36:35.433890143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 12:36:35.434720 containerd[1942]: time="2025-04-30T12:36:35.433921139Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 12:36:35.434720 containerd[1942]: time="2025-04-30T12:36:35.433967903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 12:36:35.434720 containerd[1942]: time="2025-04-30T12:36:35.433998443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 12:36:35.434720 containerd[1942]: time="2025-04-30T12:36:35.434024651Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 12:36:35.436247 containerd[1942]: time="2025-04-30T12:36:35.435645575Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 12:36:35.436247 containerd[1942]: time="2025-04-30T12:36:35.435834791Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 12:36:35.436247 containerd[1942]: time="2025-04-30T12:36:35.435862247Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 12:36:35.436247 containerd[1942]: time="2025-04-30T12:36:35.435917939Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 12:36:35.436247 containerd[1942]: time="2025-04-30T12:36:35.435946211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 12:36:35.436247 containerd[1942]: time="2025-04-30T12:36:35.436000283Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 12:36:35.436247 containerd[1942]: time="2025-04-30T12:36:35.436026959Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 12:36:35.436247 containerd[1942]: time="2025-04-30T12:36:35.436097075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 12:36:35.437880 polkitd[2092]: Loading rules from directory /etc/polkit-1/rules.d
Apr 30 12:36:35.438002 polkitd[2092]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 30 12:36:35.438915 containerd[1942]: time="2025-04-30T12:36:35.438586751Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 12:36:35.439593 containerd[1942]: time="2025-04-30T12:36:35.439257383Z" level=info msg="Connect containerd service"
Apr 30 12:36:35.440278 containerd[1942]: time="2025-04-30T12:36:35.439895603Z" level=info msg="using legacy CRI server"
Apr 30 12:36:35.440278 containerd[1942]: time="2025-04-30T12:36:35.440017619Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 12:36:35.441071 containerd[1942]: time="2025-04-30T12:36:35.440788079Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 12:36:35.442627 polkitd[2092]: Finished loading, compiling and executing 2 rules
Apr 30 12:36:35.443538 containerd[1942]: time="2025-04-30T12:36:35.443294843Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 12:36:35.444181 containerd[1942]: time="2025-04-30T12:36:35.443915327Z" level=info msg="Start subscribing containerd event"
Apr 30 12:36:35.444181 containerd[1942]: time="2025-04-30T12:36:35.444006143Z" level=info msg="Start recovering state"
Apr 30 12:36:35.444459 containerd[1942]: time="2025-04-30T12:36:35.444428123Z" level=info msg="Start event monitor"
Apr 30 12:36:35.444563 containerd[1942]: time="2025-04-30T12:36:35.444538559Z" level=info msg="Start snapshots syncer"
Apr 30 12:36:35.445092 containerd[1942]: time="2025-04-30T12:36:35.444651155Z" level=info msg="Start cni network conf syncer for default"
Apr 30 12:36:35.445092 containerd[1942]: time="2025-04-30T12:36:35.444677051Z" level=info msg="Start streaming server"
Apr 30 12:36:35.445958 containerd[1942]: time="2025-04-30T12:36:35.445897727Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 12:36:35.446595 containerd[1942]: time="2025-04-30T12:36:35.446560247Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 12:36:35.447411 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 12:36:35.450784 dbus-daemon[1913]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 30 12:36:35.451156 systemd[1]: Started polkit.service - Authorization Manager.
Apr 30 12:36:35.454186 containerd[1942]: time="2025-04-30T12:36:35.453983135Z" level=info msg="containerd successfully booted in 0.358430s"
Apr 30 12:36:35.455866 polkitd[2092]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 30 12:36:35.511636 ntpd[1917]: bind(24) AF_INET6 fe80::43d:d1ff:fe2a:cfe5%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 12:36:35.512487 ntpd[1917]: 30 Apr 12:36:35 ntpd[1917]: bind(24) AF_INET6 fe80::43d:d1ff:fe2a:cfe5%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 12:36:35.512487 ntpd[1917]: 30 Apr 12:36:35 ntpd[1917]: unable to create socket on eth0 (6) for fe80::43d:d1ff:fe2a:cfe5%2#123
Apr 30 12:36:35.512487 ntpd[1917]: 30 Apr 12:36:35 ntpd[1917]: failed to init interface for address fe80::43d:d1ff:fe2a:cfe5%2
Apr 30 12:36:35.511700 ntpd[1917]: unable to create socket on eth0 (6) for fe80::43d:d1ff:fe2a:cfe5%2#123
Apr 30 12:36:35.511728 ntpd[1917]: failed to init interface for address fe80::43d:d1ff:fe2a:cfe5%2
Apr 30 12:36:35.526284 systemd-resolved[1856]: System hostname changed to 'ip-172-31-17-143'.
Apr 30 12:36:35.526285 systemd-hostnamed[1957]: Hostname set to (transient)
Apr 30 12:36:35.683536 sshd_keygen[1964]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 12:36:35.710205 systemd-networkd[1855]: eth0: Gained IPv6LL
Apr 30 12:36:35.717287 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 12:36:35.722957 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 12:36:35.738566 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 30 12:36:35.747479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:36:35.753924 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 12:36:35.770128 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 12:36:35.784752 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 12:36:35.791854 systemd[1]: Started sshd@0-172.31.17.143:22-139.178.89.65:53824.service - OpenSSH per-connection server daemon (139.178.89.65:53824).
Apr 30 12:36:35.848789 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 12:36:35.851791 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 12:36:35.871413 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 12:36:35.893884 amazon-ssm-agent[2126]: Initializing new seelog logger
Apr 30 12:36:35.894379 amazon-ssm-agent[2126]: New Seelog Logger Creation Complete
Apr 30 12:36:35.894379 amazon-ssm-agent[2126]: 2025/04/30 12:36:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:36:35.894379 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:36:35.897075 amazon-ssm-agent[2126]: 2025/04/30 12:36:35 processing appconfig overrides
Apr 30 12:36:35.897075 amazon-ssm-agent[2126]: 2025/04/30 12:36:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:36:35.897075 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:36:35.897075 amazon-ssm-agent[2126]: 2025/04/30 12:36:35 processing appconfig overrides
Apr 30 12:36:35.897075 amazon-ssm-agent[2126]: 2025/04/30 12:36:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:36:35.897075 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:36:35.897075 amazon-ssm-agent[2126]: 2025/04/30 12:36:35 processing appconfig overrides
Apr 30 12:36:35.897075 amazon-ssm-agent[2126]: 2025-04-30 12:36:35 INFO Proxy environment variables:
Apr 30 12:36:35.905374 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 12:36:35.913002 amazon-ssm-agent[2126]: 2025/04/30 12:36:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:36:35.913002 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 12:36:35.913002 amazon-ssm-agent[2126]: 2025/04/30 12:36:35 processing appconfig overrides
Apr 30 12:36:35.924093 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 12:36:35.938278 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 12:36:35.952464 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 30 12:36:35.957476 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 12:36:35.996799 amazon-ssm-agent[2126]: 2025-04-30 12:36:35 INFO https_proxy:
Apr 30 12:36:36.100091 amazon-ssm-agent[2126]: 2025-04-30 12:36:35 INFO http_proxy:
Apr 30 12:36:36.153181 sshd[2131]: Accepted publickey for core from 139.178.89.65 port 53824 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:36:36.154712 sshd-session[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:36:36.176521 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 12:36:36.188678 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 12:36:36.200848 amazon-ssm-agent[2126]: 2025-04-30 12:36:35 INFO no_proxy:
Apr 30 12:36:36.231894 systemd-logind[1925]: New session 1 of user core.
Apr 30 12:36:36.253207 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 12:36:36.273529 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 12:36:36.291582 (systemd)[2156]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 12:36:36.296542 tar[1940]: linux-arm64/LICENSE
Apr 30 12:36:36.296542 tar[1940]: linux-arm64/README.md
Apr 30 12:36:36.299191 amazon-ssm-agent[2126]: 2025-04-30 12:36:35 INFO Checking if agent identity type OnPrem can be assumed
Apr 30 12:36:36.311300 systemd-logind[1925]: New session c1 of user core.
Apr 30 12:36:36.348170 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 12:36:36.397422 amazon-ssm-agent[2126]: 2025-04-30 12:36:35 INFO Checking if agent identity type EC2 can be assumed
Apr 30 12:36:36.496504 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO Agent will take identity from EC2
Apr 30 12:36:36.595725 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 30 12:36:36.695663 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 30 12:36:36.765390 systemd[2156]: Queued start job for default target default.target.
Apr 30 12:36:36.772156 systemd[2156]: Created slice app.slice - User Application Slice.
Apr 30 12:36:36.773119 systemd[2156]: Reached target paths.target - Paths.
Apr 30 12:36:36.773229 systemd[2156]: Reached target timers.target - Timers.
Apr 30 12:36:36.776728 systemd[2156]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 12:36:36.795134 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 30 12:36:36.815883 systemd[2156]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 12:36:36.816204 systemd[2156]: Reached target sockets.target - Sockets.
Apr 30 12:36:36.816324 systemd[2156]: Reached target basic.target - Basic System.
Apr 30 12:36:36.816432 systemd[2156]: Reached target default.target - Main User Target.
Apr 30 12:36:36.816463 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 12:36:36.816494 systemd[2156]: Startup finished in 483ms.
Apr 30 12:36:36.825871 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 12:36:36.894674 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 30 12:36:36.940125 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Apr 30 12:36:36.940325 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [amazon-ssm-agent] Starting Core Agent
Apr 30 12:36:36.940535 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 30 12:36:36.940535 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [Registrar] Starting registrar module
Apr 30 12:36:36.940535 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 30 12:36:36.940535 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [EC2Identity] EC2 registration was successful.
Apr 30 12:36:36.940535 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [CredentialRefresher] credentialRefresher has started
Apr 30 12:36:36.940535 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 30 12:36:36.940535 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 30 12:36:36.994689 amazon-ssm-agent[2126]: 2025-04-30 12:36:36 INFO [CredentialRefresher] Next credential rotation will be in 31.3249836852 minutes
Apr 30 12:36:37.054632 systemd[1]: Started sshd@1-172.31.17.143:22-139.178.89.65:55298.service - OpenSSH per-connection server daemon (139.178.89.65:55298).
Apr 30 12:36:37.336643 sshd[2173]: Accepted publickey for core from 139.178.89.65 port 55298 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:36:37.339292 sshd-session[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:36:37.351433 systemd-logind[1925]: New session 2 of user core.
Apr 30 12:36:37.357397 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 12:36:37.484310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:36:37.488728 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 12:36:37.495223 systemd[1]: Startup finished in 1.078s (kernel) + 8.873s (initrd) + 8.887s (userspace) = 18.840s.
Apr 30 12:36:37.505854 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:36:37.545278 sshd[2175]: Connection closed by 139.178.89.65 port 55298
Apr 30 12:36:37.545779 sshd-session[2173]: pam_unix(sshd:session): session closed for user core
Apr 30 12:36:37.554807 systemd[1]: sshd@1-172.31.17.143:22-139.178.89.65:55298.service: Deactivated successfully.
Apr 30 12:36:37.559978 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 12:36:37.561736 systemd-logind[1925]: Session 2 logged out. Waiting for processes to exit.
Apr 30 12:36:37.564025 systemd-logind[1925]: Removed session 2.
Apr 30 12:36:37.606926 systemd[1]: Started sshd@2-172.31.17.143:22-139.178.89.65:55308.service - OpenSSH per-connection server daemon (139.178.89.65:55308).
Apr 30 12:36:37.893898 sshd[2191]: Accepted publickey for core from 139.178.89.65 port 55308 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:36:37.896509 sshd-session[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:36:37.906441 systemd-logind[1925]: New session 3 of user core.
Apr 30 12:36:37.913353 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 12:36:37.969845 amazon-ssm-agent[2126]: 2025-04-30 12:36:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 30 12:36:38.070353 amazon-ssm-agent[2126]: 2025-04-30 12:36:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2199) started
Apr 30 12:36:38.090156 sshd[2197]: Connection closed by 139.178.89.65 port 55308
Apr 30 12:36:38.093391 sshd-session[2191]: pam_unix(sshd:session): session closed for user core
Apr 30 12:36:38.100376 systemd[1]: sshd@2-172.31.17.143:22-139.178.89.65:55308.service: Deactivated successfully.
Apr 30 12:36:38.104860 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 12:36:38.110952 systemd-logind[1925]: Session 3 logged out. Waiting for processes to exit.
Apr 30 12:36:38.114626 systemd-logind[1925]: Removed session 3.
Apr 30 12:36:38.149615 systemd[1]: Started sshd@3-172.31.17.143:22-139.178.89.65:55314.service - OpenSSH per-connection server daemon (139.178.89.65:55314).
Apr 30 12:36:38.170938 amazon-ssm-agent[2126]: 2025-04-30 12:36:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 30 12:36:38.438670 sshd[2210]: Accepted publickey for core from 139.178.89.65 port 55314 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:36:38.441512 sshd-session[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:36:38.452449 systemd-logind[1925]: New session 4 of user core.
Apr 30 12:36:38.457388 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 12:36:38.510482 ntpd[1917]: Listen normally on 7 eth0 [fe80::43d:d1ff:fe2a:cfe5%2]:123
Apr 30 12:36:38.511122 ntpd[1917]: 30 Apr 12:36:38 ntpd[1917]: Listen normally on 7 eth0 [fe80::43d:d1ff:fe2a:cfe5%2]:123
Apr 30 12:36:38.564494 kubelet[2181]: E0430 12:36:38.564404 2181 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:36:38.569254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:36:38.569586 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:36:38.570433 systemd[1]: kubelet.service: Consumed 1.329s CPU time, 240.1M memory peak.
Apr 30 12:36:38.635620 sshd[2218]: Connection closed by 139.178.89.65 port 55314
Apr 30 12:36:38.636498 sshd-session[2210]: pam_unix(sshd:session): session closed for user core
Apr 30 12:36:38.641608 systemd-logind[1925]: Session 4 logged out. Waiting for processes to exit.
Apr 30 12:36:38.642739 systemd[1]: sshd@3-172.31.17.143:22-139.178.89.65:55314.service: Deactivated successfully.
Apr 30 12:36:38.646736 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 12:36:38.651237 systemd-logind[1925]: Removed session 4.
Apr 30 12:36:38.691557 systemd[1]: Started sshd@4-172.31.17.143:22-139.178.89.65:55326.service - OpenSSH per-connection server daemon (139.178.89.65:55326).
Apr 30 12:36:38.971288 sshd[2225]: Accepted publickey for core from 139.178.89.65 port 55326 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:36:38.973556 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:36:38.983362 systemd-logind[1925]: New session 5 of user core.
Apr 30 12:36:38.991305 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 12:36:39.144443 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 12:36:39.145126 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:36:39.162713 sudo[2228]: pam_unix(sudo:session): session closed for user root
Apr 30 12:36:39.200742 sshd[2227]: Connection closed by 139.178.89.65 port 55326
Apr 30 12:36:39.201843 sshd-session[2225]: pam_unix(sshd:session): session closed for user core
Apr 30 12:36:39.208778 systemd[1]: sshd@4-172.31.17.143:22-139.178.89.65:55326.service: Deactivated successfully.
Apr 30 12:36:39.212548 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 12:36:39.214536 systemd-logind[1925]: Session 5 logged out. Waiting for processes to exit.
Apr 30 12:36:39.216505 systemd-logind[1925]: Removed session 5.
Apr 30 12:36:39.255594 systemd[1]: Started sshd@5-172.31.17.143:22-139.178.89.65:55334.service - OpenSSH per-connection server daemon (139.178.89.65:55334).
Apr 30 12:36:39.524537 sshd[2234]: Accepted publickey for core from 139.178.89.65 port 55334 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:36:39.527490 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:36:39.535665 systemd-logind[1925]: New session 6 of user core.
Apr 30 12:36:39.548301 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 12:36:39.686634 sudo[2238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 12:36:39.687325 sudo[2238]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:36:39.694709 sudo[2238]: pam_unix(sudo:session): session closed for user root
Apr 30 12:36:39.705293 sudo[2237]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 30 12:36:39.705901 sudo[2237]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:36:39.728672 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 30 12:36:39.776959 augenrules[2260]: No rules
Apr 30 12:36:39.778322 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 12:36:39.778741 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 30 12:36:39.781281 sudo[2237]: pam_unix(sudo:session): session closed for user root
Apr 30 12:36:39.818610 sshd[2236]: Connection closed by 139.178.89.65 port 55334
Apr 30 12:36:39.819389 sshd-session[2234]: pam_unix(sshd:session): session closed for user core
Apr 30 12:36:39.826229 systemd-logind[1925]: Session 6 logged out. Waiting for processes to exit.
Apr 30 12:36:39.826434 systemd[1]: sshd@5-172.31.17.143:22-139.178.89.65:55334.service: Deactivated successfully.
Apr 30 12:36:39.830239 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 12:36:39.831829 systemd-logind[1925]: Removed session 6.
Apr 30 12:36:39.887503 systemd[1]: Started sshd@6-172.31.17.143:22-139.178.89.65:55348.service - OpenSSH per-connection server daemon (139.178.89.65:55348).
Apr 30 12:36:40.154243 sshd[2269]: Accepted publickey for core from 139.178.89.65 port 55348 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:36:40.156859 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:36:40.165637 systemd-logind[1925]: New session 7 of user core.
Apr 30 12:36:40.171318 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 12:36:40.314914 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 12:36:40.315705 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 12:36:40.870484 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 12:36:40.870732 (dockerd)[2289]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 12:36:41.224168 dockerd[2289]: time="2025-04-30T12:36:41.223791831Z" level=info msg="Starting up"
Apr 30 12:36:41.448544 dockerd[2289]: time="2025-04-30T12:36:41.448181093Z" level=info msg="Loading containers: start."
Apr 30 12:36:41.977680 systemd-resolved[1856]: Clock change detected. Flushing caches.
Apr 30 12:36:42.164049 kernel: Initializing XFRM netlink socket
Apr 30 12:36:42.196057 (udev-worker)[2313]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:36:42.289312 systemd-networkd[1855]: docker0: Link UP
Apr 30 12:36:42.332450 dockerd[2289]: time="2025-04-30T12:36:42.332401557Z" level=info msg="Loading containers: done."
Apr 30 12:36:42.360007 dockerd[2289]: time="2025-04-30T12:36:42.359417829Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 12:36:42.360007 dockerd[2289]: time="2025-04-30T12:36:42.359558217Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Apr 30 12:36:42.360007 dockerd[2289]: time="2025-04-30T12:36:42.359786841Z" level=info msg="Daemon has completed initialization"
Apr 30 12:36:42.420485 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 12:36:42.421794 dockerd[2289]: time="2025-04-30T12:36:42.421063738Z" level=info msg="API listen on /run/docker.sock"
Apr 30 12:36:43.560289 containerd[1942]: time="2025-04-30T12:36:43.560177819Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 12:36:44.175158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238565098.mount: Deactivated successfully.
Apr 30 12:36:45.600806 containerd[1942]: time="2025-04-30T12:36:45.600749293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:45.604127 containerd[1942]: time="2025-04-30T12:36:45.604005698Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150"
Apr 30 12:36:45.606004 containerd[1942]: time="2025-04-30T12:36:45.604929194Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:45.611760 containerd[1942]: time="2025-04-30T12:36:45.611663918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:45.614008 containerd[1942]: time="2025-04-30T12:36:45.613852022Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.053616471s"
Apr 30 12:36:45.614008 containerd[1942]: time="2025-04-30T12:36:45.613908974Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
Apr 30 12:36:45.655115 containerd[1942]: time="2025-04-30T12:36:45.655060562Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 12:36:47.239640 containerd[1942]: time="2025-04-30T12:36:47.239561726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:47.241598 containerd[1942]: time="2025-04-30T12:36:47.241533026Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550"
Apr 30 12:36:47.243998 containerd[1942]: time="2025-04-30T12:36:47.243894266Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:47.249382 containerd[1942]: time="2025-04-30T12:36:47.249298766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:47.251750 containerd[1942]: time="2025-04-30T12:36:47.251570378Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.596448988s"
Apr 30 12:36:47.251750 containerd[1942]: time="2025-04-30T12:36:47.251619722Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
Apr 30 12:36:47.294323 containerd[1942]: time="2025-04-30T12:36:47.294265058Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 12:36:48.343216 containerd[1942]: time="2025-04-30T12:36:48.343162635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:48.346366 containerd[1942]: time="2025-04-30T12:36:48.346308675Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945"
Apr 30 12:36:48.348242 containerd[1942]: time="2025-04-30T12:36:48.348201243Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:48.355020 containerd[1942]: time="2025-04-30T12:36:48.354952755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:48.357415 containerd[1942]: time="2025-04-30T12:36:48.357374883Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.062783701s"
Apr 30 12:36:48.358496 containerd[1942]: time="2025-04-30T12:36:48.358465635Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
Apr 30 12:36:48.398747 containerd[1942]: time="2025-04-30T12:36:48.398682171Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 12:36:49.287648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 12:36:49.296433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 12:36:49.639502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 12:36:49.650663 (kubelet)[2570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 12:36:49.765418 kubelet[2570]: E0430 12:36:49.765331 2570 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 12:36:49.772425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597693737.mount: Deactivated successfully.
Apr 30 12:36:49.775508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 12:36:49.775815 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 12:36:49.776619 systemd[1]: kubelet.service: Consumed 313ms CPU time, 92.6M memory peak.
Apr 30 12:36:50.279117 containerd[1942]: time="2025-04-30T12:36:50.278296253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:50.280469 containerd[1942]: time="2025-04-30T12:36:50.280082849Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705"
Apr 30 12:36:50.281776 containerd[1942]: time="2025-04-30T12:36:50.281687465Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:50.285567 containerd[1942]: time="2025-04-30T12:36:50.285471665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:50.287292 containerd[1942]: time="2025-04-30T12:36:50.287088977Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.888126462s"
Apr 30 12:36:50.287292 containerd[1942]: time="2025-04-30T12:36:50.287149493Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
Apr 30 12:36:50.327675 containerd[1942]: time="2025-04-30T12:36:50.327621485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 12:36:50.872132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296016848.mount: Deactivated successfully.
Apr 30 12:36:51.950835 containerd[1942]: time="2025-04-30T12:36:51.950754681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:51.953034 containerd[1942]: time="2025-04-30T12:36:51.952927929Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Apr 30 12:36:51.954097 containerd[1942]: time="2025-04-30T12:36:51.954028977Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:51.959789 containerd[1942]: time="2025-04-30T12:36:51.959707365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:51.962456 containerd[1942]: time="2025-04-30T12:36:51.962236389Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.634554436s"
Apr 30 12:36:51.962456 containerd[1942]: time="2025-04-30T12:36:51.962299125Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Apr 30 12:36:52.000930 containerd[1942]: time="2025-04-30T12:36:52.000887393Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 12:36:52.484456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4208934457.mount: Deactivated successfully.
Apr 30 12:36:52.493660 containerd[1942]: time="2025-04-30T12:36:52.493582196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:52.495220 containerd[1942]: time="2025-04-30T12:36:52.495110840Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Apr 30 12:36:52.495832 containerd[1942]: time="2025-04-30T12:36:52.495763448Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:52.501427 containerd[1942]: time="2025-04-30T12:36:52.501332624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 12:36:52.503268 containerd[1942]: time="2025-04-30T12:36:52.502931084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 501.783471ms" Apr 30 12:36:52.503268 containerd[1942]: time="2025-04-30T12:36:52.502992548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 30 12:36:52.545358 containerd[1942]: time="2025-04-30T12:36:52.545255012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 12:36:53.067114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171553320.mount: Deactivated successfully. Apr 30 12:36:55.497925 containerd[1942]: time="2025-04-30T12:36:55.497841923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:36:55.523556 containerd[1942]: time="2025-04-30T12:36:55.523409795Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Apr 30 12:36:55.550305 containerd[1942]: time="2025-04-30T12:36:55.550184159Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:36:55.596459 containerd[1942]: time="2025-04-30T12:36:55.596405723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:36:55.599712 containerd[1942]: time="2025-04-30T12:36:55.598999067Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.053657475s" Apr 30 
12:36:55.599712 containerd[1942]: time="2025-04-30T12:36:55.599069483Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Apr 30 12:37:00.027096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 12:37:00.036453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:37:00.336284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:37:00.339292 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 12:37:00.422992 kubelet[2756]: E0430 12:37:00.421799 2756 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 12:37:00.425326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 12:37:00.425635 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 12:37:00.427532 systemd[1]: kubelet.service: Consumed 275ms CPU time, 94.7M memory peak. Apr 30 12:37:02.541577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:37:02.542335 systemd[1]: kubelet.service: Consumed 275ms CPU time, 94.7M memory peak. Apr 30 12:37:02.549515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:37:02.599209 systemd[1]: Reload requested from client PID 2770 ('systemctl') (unit session-7.scope)... Apr 30 12:37:02.599257 systemd[1]: Reloading... Apr 30 12:37:02.870070 zram_generator::config[2822]: No configuration found. 
Apr 30 12:37:03.098808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:37:03.327167 systemd[1]: Reloading finished in 727 ms. Apr 30 12:37:03.429334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:37:03.439851 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:37:03.442815 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:37:03.443274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:37:03.443359 systemd[1]: kubelet.service: Consumed 212ms CPU time, 82.3M memory peak. Apr 30 12:37:03.449742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:37:03.751507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:37:03.764537 (kubelet)[2882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:37:03.845303 kubelet[2882]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:37:03.845303 kubelet[2882]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:37:03.845303 kubelet[2882]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 12:37:03.845863 kubelet[2882]: I0430 12:37:03.845430 2882 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:37:05.346011 kubelet[2882]: I0430 12:37:05.345368 2882 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 12:37:05.346011 kubelet[2882]: I0430 12:37:05.345417 2882 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:37:05.346011 kubelet[2882]: I0430 12:37:05.345735 2882 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 12:37:05.377116 kubelet[2882]: E0430 12:37:05.377057 2882 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:05.377654 kubelet[2882]: I0430 12:37:05.377467 2882 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:37:05.393601 kubelet[2882]: I0430 12:37:05.393535 2882 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:37:05.396115 kubelet[2882]: I0430 12:37:05.396039 2882 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:37:05.396475 kubelet[2882]: I0430 12:37:05.396111 2882 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 12:37:05.396683 kubelet[2882]: I0430 12:37:05.396498 2882 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 
12:37:05.396683 kubelet[2882]: I0430 12:37:05.396521 2882 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 12:37:05.396811 kubelet[2882]: I0430 12:37:05.396779 2882 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:37:05.399410 kubelet[2882]: I0430 12:37:05.398351 2882 kubelet.go:400] "Attempting to sync node with API server" Apr 30 12:37:05.399410 kubelet[2882]: I0430 12:37:05.398390 2882 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:37:05.399410 kubelet[2882]: I0430 12:37:05.398471 2882 kubelet.go:312] "Adding apiserver pod source" Apr 30 12:37:05.399410 kubelet[2882]: I0430 12:37:05.398499 2882 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:37:05.400326 kubelet[2882]: I0430 12:37:05.400292 2882 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:37:05.400795 kubelet[2882]: I0430 12:37:05.400771 2882 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:37:05.400991 kubelet[2882]: W0430 12:37:05.400950 2882 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 12:37:05.402187 kubelet[2882]: I0430 12:37:05.402155 2882 server.go:1264] "Started kubelet" Apr 30 12:37:05.402557 kubelet[2882]: W0430 12:37:05.402498 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:05.402723 kubelet[2882]: E0430 12:37:05.402700 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:05.402954 kubelet[2882]: W0430 12:37:05.402906 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-143&limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:05.403609 kubelet[2882]: E0430 12:37:05.403116 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-143&limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:05.410109 kubelet[2882]: I0430 12:37:05.410068 2882 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:37:05.416678 kubelet[2882]: I0430 12:37:05.416600 2882 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:37:05.418369 kubelet[2882]: I0430 12:37:05.418319 2882 server.go:455] "Adding debug handlers to kubelet server" Apr 30 12:37:05.420469 kubelet[2882]: I0430 12:37:05.419883 2882 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 12:37:05.420469 kubelet[2882]: I0430 12:37:05.420286 2882 
server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:37:05.422070 kubelet[2882]: E0430 12:37:05.421320 2882 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.143:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.143:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-143.183b18deb9050604 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-143,UID:ip-172-31-17-143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-143,},FirstTimestamp:2025-04-30 12:37:05.402119684 +0000 UTC m=+1.631435697,LastTimestamp:2025-04-30 12:37:05.402119684 +0000 UTC m=+1.631435697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-143,}" Apr 30 12:37:05.422070 kubelet[2882]: I0430 12:37:05.421585 2882 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 12:37:05.422070 kubelet[2882]: I0430 12:37:05.421748 2882 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:37:05.424295 kubelet[2882]: I0430 12:37:05.424263 2882 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:37:05.425282 kubelet[2882]: W0430 12:37:05.425104 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:05.425282 kubelet[2882]: E0430 12:37:05.425217 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://172.31.17.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:05.427105 kubelet[2882]: E0430 12:37:05.426806 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-143?timeout=10s\": dial tcp 172.31.17.143:6443: connect: connection refused" interval="200ms" Apr 30 12:37:05.427105 kubelet[2882]: E0430 12:37:05.426947 2882 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:37:05.427888 kubelet[2882]: I0430 12:37:05.427561 2882 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:37:05.427888 kubelet[2882]: I0430 12:37:05.427724 2882 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:37:05.430577 kubelet[2882]: I0430 12:37:05.430543 2882 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:37:05.446067 kubelet[2882]: I0430 12:37:05.444856 2882 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:37:05.453803 kubelet[2882]: I0430 12:37:05.453761 2882 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:37:05.454136 kubelet[2882]: I0430 12:37:05.454082 2882 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:37:05.454273 kubelet[2882]: I0430 12:37:05.454253 2882 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 12:37:05.456119 kubelet[2882]: E0430 12:37:05.456075 2882 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:37:05.456925 kubelet[2882]: W0430 12:37:05.456862 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:05.457502 kubelet[2882]: E0430 12:37:05.457470 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:05.470352 kubelet[2882]: I0430 12:37:05.470320 2882 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:37:05.470857 kubelet[2882]: I0430 12:37:05.470695 2882 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:37:05.470857 kubelet[2882]: I0430 12:37:05.470733 2882 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:37:05.475576 kubelet[2882]: I0430 12:37:05.475547 2882 policy_none.go:49] "None policy: Start" Apr 30 12:37:05.476903 kubelet[2882]: I0430 12:37:05.476864 2882 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:37:05.477048 kubelet[2882]: I0430 12:37:05.476924 2882 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:37:05.490166 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 30 12:37:05.509469 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 12:37:05.515614 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 12:37:05.525757 kubelet[2882]: I0430 12:37:05.524607 2882 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:37:05.525757 kubelet[2882]: I0430 12:37:05.524882 2882 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:37:05.525757 kubelet[2882]: I0430 12:37:05.525078 2882 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:37:05.530003 kubelet[2882]: I0430 12:37:05.529028 2882 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-143" Apr 30 12:37:05.530003 kubelet[2882]: E0430 12:37:05.529542 2882 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-143\" not found" Apr 30 12:37:05.530003 kubelet[2882]: E0430 12:37:05.529678 2882 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.143:6443/api/v1/nodes\": dial tcp 172.31.17.143:6443: connect: connection refused" node="ip-172-31-17-143" Apr 30 12:37:05.557271 kubelet[2882]: I0430 12:37:05.557187 2882 topology_manager.go:215] "Topology Admit Handler" podUID="93c4c23ff10efe29e236c61d5afcec82" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-143" Apr 30 12:37:05.559592 kubelet[2882]: I0430 12:37:05.559544 2882 topology_manager.go:215] "Topology Admit Handler" podUID="e868639dd263ef4937913b804420a35b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:05.566029 kubelet[2882]: I0430 12:37:05.565894 2882 topology_manager.go:215] "Topology Admit Handler" podUID="a86f18d834d3c17fad2f2ea26934f55c" podNamespace="kube-system" 
podName="kube-scheduler-ip-172-31-17-143" Apr 30 12:37:05.581917 systemd[1]: Created slice kubepods-burstable-pod93c4c23ff10efe29e236c61d5afcec82.slice - libcontainer container kubepods-burstable-pod93c4c23ff10efe29e236c61d5afcec82.slice. Apr 30 12:37:05.607168 systemd[1]: Created slice kubepods-burstable-pode868639dd263ef4937913b804420a35b.slice - libcontainer container kubepods-burstable-pode868639dd263ef4937913b804420a35b.slice. Apr 30 12:37:05.618813 systemd[1]: Created slice kubepods-burstable-poda86f18d834d3c17fad2f2ea26934f55c.slice - libcontainer container kubepods-burstable-poda86f18d834d3c17fad2f2ea26934f55c.slice. Apr 30 12:37:05.625549 kubelet[2882]: I0430 12:37:05.625478 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e868639dd263ef4937913b804420a35b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-143\" (UID: \"e868639dd263ef4937913b804420a35b\") " pod="kube-system/kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:05.625549 kubelet[2882]: I0430 12:37:05.625547 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93c4c23ff10efe29e236c61d5afcec82-ca-certs\") pod \"kube-apiserver-ip-172-31-17-143\" (UID: \"93c4c23ff10efe29e236c61d5afcec82\") " pod="kube-system/kube-apiserver-ip-172-31-17-143" Apr 30 12:37:05.625749 kubelet[2882]: I0430 12:37:05.625590 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93c4c23ff10efe29e236c61d5afcec82-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-143\" (UID: \"93c4c23ff10efe29e236c61d5afcec82\") " pod="kube-system/kube-apiserver-ip-172-31-17-143" Apr 30 12:37:05.625749 kubelet[2882]: I0430 12:37:05.625628 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e868639dd263ef4937913b804420a35b-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-143\" (UID: \"e868639dd263ef4937913b804420a35b\") " pod="kube-system/kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:05.625749 kubelet[2882]: I0430 12:37:05.625665 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e868639dd263ef4937913b804420a35b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-143\" (UID: \"e868639dd263ef4937913b804420a35b\") " pod="kube-system/kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:05.625749 kubelet[2882]: I0430 12:37:05.625703 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a86f18d834d3c17fad2f2ea26934f55c-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-143\" (UID: \"a86f18d834d3c17fad2f2ea26934f55c\") " pod="kube-system/kube-scheduler-ip-172-31-17-143" Apr 30 12:37:05.625749 kubelet[2882]: I0430 12:37:05.625738 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93c4c23ff10efe29e236c61d5afcec82-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-143\" (UID: \"93c4c23ff10efe29e236c61d5afcec82\") " pod="kube-system/kube-apiserver-ip-172-31-17-143" Apr 30 12:37:05.626022 kubelet[2882]: I0430 12:37:05.625771 2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e868639dd263ef4937913b804420a35b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-143\" (UID: \"e868639dd263ef4937913b804420a35b\") " pod="kube-system/kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:05.626022 kubelet[2882]: I0430 12:37:05.625807 
2882 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e868639dd263ef4937913b804420a35b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-143\" (UID: \"e868639dd263ef4937913b804420a35b\") " pod="kube-system/kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:05.627956 kubelet[2882]: E0430 12:37:05.627880 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-143?timeout=10s\": dial tcp 172.31.17.143:6443: connect: connection refused" interval="400ms" Apr 30 12:37:05.733086 kubelet[2882]: I0430 12:37:05.733019 2882 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-143" Apr 30 12:37:05.733587 kubelet[2882]: E0430 12:37:05.733529 2882 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.143:6443/api/v1/nodes\": dial tcp 172.31.17.143:6443: connect: connection refused" node="ip-172-31-17-143" Apr 30 12:37:05.901987 containerd[1942]: time="2025-04-30T12:37:05.901766866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-143,Uid:93c4c23ff10efe29e236c61d5afcec82,Namespace:kube-system,Attempt:0,}" Apr 30 12:37:05.914828 containerd[1942]: time="2025-04-30T12:37:05.914696158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-143,Uid:e868639dd263ef4937913b804420a35b,Namespace:kube-system,Attempt:0,}" Apr 30 12:37:05.924408 containerd[1942]: time="2025-04-30T12:37:05.924352738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-143,Uid:a86f18d834d3c17fad2f2ea26934f55c,Namespace:kube-system,Attempt:0,}" Apr 30 12:37:06.027395 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 30 12:37:06.031193 kubelet[2882]: E0430 12:37:06.031123 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-143?timeout=10s\": dial tcp 172.31.17.143:6443: connect: connection refused" interval="800ms" Apr 30 12:37:06.136489 kubelet[2882]: I0430 12:37:06.136132 2882 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-143" Apr 30 12:37:06.136640 kubelet[2882]: E0430 12:37:06.136591 2882 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.143:6443/api/v1/nodes\": dial tcp 172.31.17.143:6443: connect: connection refused" node="ip-172-31-17-143" Apr 30 12:37:06.429858 kubelet[2882]: W0430 12:37:06.429733 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-143&limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:06.429858 kubelet[2882]: E0430 12:37:06.429822 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-143&limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:06.468614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2144744781.mount: Deactivated successfully. 
Apr 30 12:37:06.482185 containerd[1942]: time="2025-04-30T12:37:06.482108337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:37:06.486346 containerd[1942]: time="2025-04-30T12:37:06.486271749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 30 12:37:06.491009 containerd[1942]: time="2025-04-30T12:37:06.490892457Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:37:06.495319 containerd[1942]: time="2025-04-30T12:37:06.495239445Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:37:06.497604 containerd[1942]: time="2025-04-30T12:37:06.497526573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:37:06.502993 containerd[1942]: time="2025-04-30T12:37:06.501163965Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:37:06.503534 containerd[1942]: time="2025-04-30T12:37:06.503464569Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 12:37:06.505749 containerd[1942]: time="2025-04-30T12:37:06.505689345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 12:37:06.509487 
containerd[1942]: time="2025-04-30T12:37:06.509425953Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 594.626787ms" Apr 30 12:37:06.511093 containerd[1942]: time="2025-04-30T12:37:06.511014105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 609.100707ms" Apr 30 12:37:06.518986 containerd[1942]: time="2025-04-30T12:37:06.518900301Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 594.440931ms" Apr 30 12:37:06.581669 kubelet[2882]: W0430 12:37:06.580997 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:06.581669 kubelet[2882]: E0430 12:37:06.581092 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:06.588498 kubelet[2882]: W0430 12:37:06.588218 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed 
to list *v1.RuntimeClass: Get "https://172.31.17.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:06.588498 kubelet[2882]: E0430 12:37:06.588339 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:06.727611 containerd[1942]: time="2025-04-30T12:37:06.727013182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:37:06.727611 containerd[1942]: time="2025-04-30T12:37:06.727143502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:37:06.729287 containerd[1942]: time="2025-04-30T12:37:06.729135490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:37:06.729287 containerd[1942]: time="2025-04-30T12:37:06.729245506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:37:06.730220 containerd[1942]: time="2025-04-30T12:37:06.729901990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:06.730220 containerd[1942]: time="2025-04-30T12:37:06.729634930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:06.730406 containerd[1942]: time="2025-04-30T12:37:06.730133818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:06.733422 containerd[1942]: time="2025-04-30T12:37:06.732013078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:37:06.733422 containerd[1942]: time="2025-04-30T12:37:06.732134110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:37:06.733422 containerd[1942]: time="2025-04-30T12:37:06.732170038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:06.733422 containerd[1942]: time="2025-04-30T12:37:06.732334558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:06.734295 containerd[1942]: time="2025-04-30T12:37:06.734039794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:06.746882 kubelet[2882]: W0430 12:37:06.746790 2882 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:06.746882 kubelet[2882]: E0430 12:37:06.746890 2882 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.143:6443: connect: connection refused Apr 30 12:37:06.794707 systemd[1]: Started cri-containerd-1673d135f108ff4defc9963beba03560eced833769a40859200f68a6389d0fe9.scope - libcontainer container 1673d135f108ff4defc9963beba03560eced833769a40859200f68a6389d0fe9. 
Apr 30 12:37:06.805404 systemd[1]: Started cri-containerd-f924e0e229b30b17ae957347916ea619c0ec1894a9283cca1975ba8a2d8bcdbb.scope - libcontainer container f924e0e229b30b17ae957347916ea619c0ec1894a9283cca1975ba8a2d8bcdbb. Apr 30 12:37:06.812082 systemd[1]: Started cri-containerd-fbfa73423b3fc929e4c91b30d4cb99be96a970e8b8171da97ef59346bc99edcd.scope - libcontainer container fbfa73423b3fc929e4c91b30d4cb99be96a970e8b8171da97ef59346bc99edcd. Apr 30 12:37:06.834213 kubelet[2882]: E0430 12:37:06.833956 2882 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-143?timeout=10s\": dial tcp 172.31.17.143:6443: connect: connection refused" interval="1.6s" Apr 30 12:37:06.924682 containerd[1942]: time="2025-04-30T12:37:06.924439715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-143,Uid:e868639dd263ef4937913b804420a35b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1673d135f108ff4defc9963beba03560eced833769a40859200f68a6389d0fe9\"" Apr 30 12:37:06.931691 containerd[1942]: time="2025-04-30T12:37:06.931213955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-143,Uid:a86f18d834d3c17fad2f2ea26934f55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f924e0e229b30b17ae957347916ea619c0ec1894a9283cca1975ba8a2d8bcdbb\"" Apr 30 12:37:06.943484 containerd[1942]: time="2025-04-30T12:37:06.943380588Z" level=info msg="CreateContainer within sandbox \"1673d135f108ff4defc9963beba03560eced833769a40859200f68a6389d0fe9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 12:37:06.944063 containerd[1942]: time="2025-04-30T12:37:06.943897668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-143,Uid:93c4c23ff10efe29e236c61d5afcec82,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"fbfa73423b3fc929e4c91b30d4cb99be96a970e8b8171da97ef59346bc99edcd\"" Apr 30 12:37:06.944167 kubelet[2882]: I0430 12:37:06.944136 2882 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-143" Apr 30 12:37:06.945459 kubelet[2882]: E0430 12:37:06.945311 2882 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.143:6443/api/v1/nodes\": dial tcp 172.31.17.143:6443: connect: connection refused" node="ip-172-31-17-143" Apr 30 12:37:06.946701 containerd[1942]: time="2025-04-30T12:37:06.946512828Z" level=info msg="CreateContainer within sandbox \"f924e0e229b30b17ae957347916ea619c0ec1894a9283cca1975ba8a2d8bcdbb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 12:37:06.953334 containerd[1942]: time="2025-04-30T12:37:06.953262672Z" level=info msg="CreateContainer within sandbox \"fbfa73423b3fc929e4c91b30d4cb99be96a970e8b8171da97ef59346bc99edcd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 12:37:06.979021 containerd[1942]: time="2025-04-30T12:37:06.978277080Z" level=info msg="CreateContainer within sandbox \"1673d135f108ff4defc9963beba03560eced833769a40859200f68a6389d0fe9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6a773cb00ca946a547456a12afad859624ba0ed9ee9c69e468678411f4db757a\"" Apr 30 12:37:06.979276 containerd[1942]: time="2025-04-30T12:37:06.979217400Z" level=info msg="StartContainer for \"6a773cb00ca946a547456a12afad859624ba0ed9ee9c69e468678411f4db757a\"" Apr 30 12:37:07.009002 containerd[1942]: time="2025-04-30T12:37:07.008859044Z" level=info msg="CreateContainer within sandbox \"f924e0e229b30b17ae957347916ea619c0ec1894a9283cca1975ba8a2d8bcdbb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"374cae1227392b5568526bdb9cc90453ad60205cb5c8d4ab62e12565c6a25d36\"" Apr 30 12:37:07.010997 containerd[1942]: time="2025-04-30T12:37:07.009585944Z" level=info msg="StartContainer for 
\"374cae1227392b5568526bdb9cc90453ad60205cb5c8d4ab62e12565c6a25d36\"" Apr 30 12:37:07.034273 systemd[1]: Started cri-containerd-6a773cb00ca946a547456a12afad859624ba0ed9ee9c69e468678411f4db757a.scope - libcontainer container 6a773cb00ca946a547456a12afad859624ba0ed9ee9c69e468678411f4db757a. Apr 30 12:37:07.050612 containerd[1942]: time="2025-04-30T12:37:07.050552108Z" level=info msg="CreateContainer within sandbox \"fbfa73423b3fc929e4c91b30d4cb99be96a970e8b8171da97ef59346bc99edcd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"09b528aa9262cc6f360b959d93a017ddf74119ba16362353f5a078fda6b459bd\"" Apr 30 12:37:07.052326 containerd[1942]: time="2025-04-30T12:37:07.052270028Z" level=info msg="StartContainer for \"09b528aa9262cc6f360b959d93a017ddf74119ba16362353f5a078fda6b459bd\"" Apr 30 12:37:07.088497 systemd[1]: Started cri-containerd-374cae1227392b5568526bdb9cc90453ad60205cb5c8d4ab62e12565c6a25d36.scope - libcontainer container 374cae1227392b5568526bdb9cc90453ad60205cb5c8d4ab62e12565c6a25d36. Apr 30 12:37:07.135400 systemd[1]: Started cri-containerd-09b528aa9262cc6f360b959d93a017ddf74119ba16362353f5a078fda6b459bd.scope - libcontainer container 09b528aa9262cc6f360b959d93a017ddf74119ba16362353f5a078fda6b459bd. 
Apr 30 12:37:07.177655 containerd[1942]: time="2025-04-30T12:37:07.177085389Z" level=info msg="StartContainer for \"6a773cb00ca946a547456a12afad859624ba0ed9ee9c69e468678411f4db757a\" returns successfully" Apr 30 12:37:07.254112 containerd[1942]: time="2025-04-30T12:37:07.253802445Z" level=info msg="StartContainer for \"374cae1227392b5568526bdb9cc90453ad60205cb5c8d4ab62e12565c6a25d36\" returns successfully" Apr 30 12:37:07.268752 containerd[1942]: time="2025-04-30T12:37:07.268679001Z" level=info msg="StartContainer for \"09b528aa9262cc6f360b959d93a017ddf74119ba16362353f5a078fda6b459bd\" returns successfully" Apr 30 12:37:08.548436 kubelet[2882]: I0430 12:37:08.548378 2882 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-143" Apr 30 12:37:13.053304 kubelet[2882]: E0430 12:37:13.053236 2882 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-143\" not found" node="ip-172-31-17-143" Apr 30 12:37:13.103700 kubelet[2882]: E0430 12:37:13.103176 2882 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-17-143.183b18deb9050604 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-143,UID:ip-172-31-17-143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-143,},FirstTimestamp:2025-04-30 12:37:05.402119684 +0000 UTC m=+1.631435697,LastTimestamp:2025-04-30 12:37:05.402119684 +0000 UTC m=+1.631435697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-143,}" Apr 30 12:37:13.168954 kubelet[2882]: I0430 12:37:13.168899 2882 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-143" Apr 30 12:37:13.169199 kubelet[2882]: E0430 
12:37:13.169050 2882 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-17-143.183b18deba7f7e64 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-143,UID:ip-172-31-17-143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-17-143,},FirstTimestamp:2025-04-30 12:37:05.426923108 +0000 UTC m=+1.656239157,LastTimestamp:2025-04-30 12:37:05.426923108 +0000 UTC m=+1.656239157,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-143,}" Apr 30 12:37:13.229503 kubelet[2882]: E0430 12:37:13.229349 2882 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-17-143.183b18debd0080e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-143,UID:ip-172-31-17-143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-17-143 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-17-143,},FirstTimestamp:2025-04-30 12:37:05.468932324 +0000 UTC m=+1.698248361,LastTimestamp:2025-04-30 12:37:05.468932324 +0000 UTC m=+1.698248361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-143,}" Apr 30 12:37:13.384549 kubelet[2882]: E0430 12:37:13.384484 2882 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-143\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-143" Apr 30 12:37:13.402846 
kubelet[2882]: I0430 12:37:13.402795 2882 apiserver.go:52] "Watching apiserver" Apr 30 12:37:13.422559 kubelet[2882]: I0430 12:37:13.422466 2882 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 12:37:15.226622 systemd[1]: Reload requested from client PID 3162 ('systemctl') (unit session-7.scope)... Apr 30 12:37:15.227154 systemd[1]: Reloading... Apr 30 12:37:15.509028 zram_generator::config[3211]: No configuration found. Apr 30 12:37:15.773022 kubelet[2882]: I0430 12:37:15.772171 2882 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-143" podStartSLOduration=2.772148707 podStartE2EDuration="2.772148707s" podCreationTimestamp="2025-04-30 12:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:37:15.523209642 +0000 UTC m=+11.752525691" watchObservedRunningTime="2025-04-30 12:37:15.772148707 +0000 UTC m=+12.001464732" Apr 30 12:37:15.777378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 12:37:16.047643 systemd[1]: Reloading finished in 819 ms. Apr 30 12:37:16.104820 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:37:16.121766 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 12:37:16.122461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 12:37:16.122699 systemd[1]: kubelet.service: Consumed 2.394s CPU time, 112M memory peak. Apr 30 12:37:16.130560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 12:37:16.460310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 12:37:16.471555 (kubelet)[3269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 12:37:16.570222 kubelet[3269]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:37:16.570222 kubelet[3269]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 12:37:16.570222 kubelet[3269]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 12:37:16.570856 kubelet[3269]: I0430 12:37:16.570316 3269 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 12:37:16.579795 kubelet[3269]: I0430 12:37:16.579738 3269 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 12:37:16.579795 kubelet[3269]: I0430 12:37:16.579780 3269 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 12:37:16.580466 kubelet[3269]: I0430 12:37:16.580385 3269 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 12:37:16.582933 kubelet[3269]: I0430 12:37:16.582773 3269 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 12:37:16.590879 kubelet[3269]: I0430 12:37:16.588654 3269 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 12:37:16.609534 kubelet[3269]: I0430 12:37:16.609437 3269 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 12:37:16.610761 kubelet[3269]: I0430 12:37:16.610623 3269 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 12:37:16.611796 kubelet[3269]: I0430 12:37:16.610854 3269 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 12:37:16.611796 kubelet[3269]: I0430 12:37:16.611495 3269 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 
12:37:16.611796 kubelet[3269]: I0430 12:37:16.611515 3269 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 12:37:16.611796 kubelet[3269]: I0430 12:37:16.611579 3269 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:37:16.612352 kubelet[3269]: I0430 12:37:16.612283 3269 kubelet.go:400] "Attempting to sync node with API server" Apr 30 12:37:16.613287 kubelet[3269]: I0430 12:37:16.613254 3269 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 12:37:16.614033 kubelet[3269]: I0430 12:37:16.613480 3269 kubelet.go:312] "Adding apiserver pod source" Apr 30 12:37:16.614033 kubelet[3269]: I0430 12:37:16.613524 3269 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 12:37:16.617665 kubelet[3269]: I0430 12:37:16.617626 3269 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 12:37:16.619316 sudo[3282]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 12:37:16.620148 sudo[3282]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 12:37:16.623811 kubelet[3269]: I0430 12:37:16.620788 3269 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 12:37:16.623811 kubelet[3269]: I0430 12:37:16.621528 3269 server.go:1264] "Started kubelet" Apr 30 12:37:16.634165 kubelet[3269]: I0430 12:37:16.634107 3269 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 12:37:16.648683 kubelet[3269]: I0430 12:37:16.647706 3269 server.go:455] "Adding debug handlers to kubelet server" Apr 30 12:37:16.657018 kubelet[3269]: I0430 12:37:16.638593 3269 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 12:37:16.660235 kubelet[3269]: I0430 12:37:16.635270 3269 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 
12:37:16.669075 kubelet[3269]: I0430 12:37:16.669034 3269 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 12:37:16.673987 kubelet[3269]: I0430 12:37:16.660449 3269 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 12:37:16.681049 kubelet[3269]: I0430 12:37:16.679841 3269 factory.go:221] Registration of the systemd container factory successfully Apr 30 12:37:16.681049 kubelet[3269]: I0430 12:37:16.680563 3269 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 12:37:16.681305 kubelet[3269]: I0430 12:37:16.660430 3269 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 12:37:16.687086 kubelet[3269]: I0430 12:37:16.681917 3269 reconciler.go:26] "Reconciler: start to sync state" Apr 30 12:37:16.696056 kubelet[3269]: I0430 12:37:16.694862 3269 factory.go:221] Registration of the containerd container factory successfully Apr 30 12:37:16.721776 kubelet[3269]: E0430 12:37:16.720575 3269 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 12:37:16.758947 kubelet[3269]: I0430 12:37:16.758843 3269 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 12:37:16.776214 kubelet[3269]: I0430 12:37:16.775260 3269 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 12:37:16.780089 kubelet[3269]: I0430 12:37:16.777052 3269 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 12:37:16.780089 kubelet[3269]: I0430 12:37:16.777107 3269 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 12:37:16.780089 kubelet[3269]: E0430 12:37:16.777184 3269 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 12:37:16.780089 kubelet[3269]: I0430 12:37:16.775333 3269 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-143" Apr 30 12:37:16.803991 kubelet[3269]: I0430 12:37:16.802463 3269 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-17-143" Apr 30 12:37:16.803991 kubelet[3269]: I0430 12:37:16.802594 3269 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-143" Apr 30 12:37:16.866724 kubelet[3269]: I0430 12:37:16.866284 3269 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 12:37:16.866724 kubelet[3269]: I0430 12:37:16.866317 3269 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 12:37:16.866724 kubelet[3269]: I0430 12:37:16.866353 3269 state_mem.go:36] "Initialized new in-memory state store" Apr 30 12:37:16.866724 kubelet[3269]: I0430 12:37:16.866598 3269 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 12:37:16.866724 kubelet[3269]: I0430 12:37:16.866618 3269 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 12:37:16.866724 kubelet[3269]: I0430 12:37:16.866653 3269 policy_none.go:49] "None policy: Start" Apr 30 12:37:16.868862 kubelet[3269]: I0430 12:37:16.868530 3269 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 12:37:16.868862 kubelet[3269]: I0430 12:37:16.868580 3269 state_mem.go:35] "Initializing new in-memory state store" Apr 30 12:37:16.868862 kubelet[3269]: I0430 12:37:16.869110 3269 state_mem.go:75] "Updated 
machine memory state" Apr 30 12:37:16.878788 kubelet[3269]: E0430 12:37:16.877244 3269 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 12:37:16.878788 kubelet[3269]: I0430 12:37:16.878136 3269 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 12:37:16.880425 kubelet[3269]: I0430 12:37:16.879538 3269 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 12:37:16.881792 kubelet[3269]: I0430 12:37:16.880739 3269 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 12:37:17.078160 kubelet[3269]: I0430 12:37:17.077871 3269 topology_manager.go:215] "Topology Admit Handler" podUID="93c4c23ff10efe29e236c61d5afcec82" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-143" Apr 30 12:37:17.078835 kubelet[3269]: I0430 12:37:17.078427 3269 topology_manager.go:215] "Topology Admit Handler" podUID="e868639dd263ef4937913b804420a35b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:17.080445 kubelet[3269]: I0430 12:37:17.080396 3269 topology_manager.go:215] "Topology Admit Handler" podUID="a86f18d834d3c17fad2f2ea26934f55c" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-143" Apr 30 12:37:17.089811 kubelet[3269]: I0430 12:37:17.089736 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93c4c23ff10efe29e236c61d5afcec82-ca-certs\") pod \"kube-apiserver-ip-172-31-17-143\" (UID: \"93c4c23ff10efe29e236c61d5afcec82\") " pod="kube-system/kube-apiserver-ip-172-31-17-143" Apr 30 12:37:17.089811 kubelet[3269]: I0430 12:37:17.089804 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/93c4c23ff10efe29e236c61d5afcec82-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-143\" (UID: \"93c4c23ff10efe29e236c61d5afcec82\") " pod="kube-system/kube-apiserver-ip-172-31-17-143" Apr 30 12:37:17.090054 kubelet[3269]: I0430 12:37:17.089846 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93c4c23ff10efe29e236c61d5afcec82-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-143\" (UID: \"93c4c23ff10efe29e236c61d5afcec82\") " pod="kube-system/kube-apiserver-ip-172-31-17-143" Apr 30 12:37:17.090054 kubelet[3269]: I0430 12:37:17.089889 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e868639dd263ef4937913b804420a35b-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-143\" (UID: \"e868639dd263ef4937913b804420a35b\") " pod="kube-system/kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:17.090054 kubelet[3269]: I0430 12:37:17.089927 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e868639dd263ef4937913b804420a35b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-143\" (UID: \"e868639dd263ef4937913b804420a35b\") " pod="kube-system/kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:17.090054 kubelet[3269]: I0430 12:37:17.089981 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e868639dd263ef4937913b804420a35b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-143\" (UID: \"e868639dd263ef4937913b804420a35b\") " pod="kube-system/kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:17.090054 kubelet[3269]: I0430 12:37:17.090025 3269 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e868639dd263ef4937913b804420a35b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-143\" (UID: \"e868639dd263ef4937913b804420a35b\") " pod="kube-system/kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:17.090311 kubelet[3269]: I0430 12:37:17.090063 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e868639dd263ef4937913b804420a35b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-143\" (UID: \"e868639dd263ef4937913b804420a35b\") " pod="kube-system/kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:17.090311 kubelet[3269]: I0430 12:37:17.090100 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a86f18d834d3c17fad2f2ea26934f55c-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-143\" (UID: \"a86f18d834d3c17fad2f2ea26934f55c\") " pod="kube-system/kube-scheduler-ip-172-31-17-143" Apr 30 12:37:17.108655 kubelet[3269]: E0430 12:37:17.108524 3269 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-17-143\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-143" Apr 30 12:37:17.111487 kubelet[3269]: E0430 12:37:17.111088 3269 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-17-143\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-143" Apr 30 12:37:17.530576 sudo[3282]: pam_unix(sudo:session): session closed for user root Apr 30 12:37:17.632049 kubelet[3269]: I0430 12:37:17.631959 3269 apiserver.go:52] "Watching apiserver" Apr 30 12:37:17.679897 kubelet[3269]: I0430 12:37:17.679781 3269 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 
12:37:17.871578 kubelet[3269]: E0430 12:37:17.871517 3269 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-143\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-143" Apr 30 12:37:17.889108 kubelet[3269]: I0430 12:37:17.889000 3269 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-143" podStartSLOduration=0.888962398 podStartE2EDuration="888.962398ms" podCreationTimestamp="2025-04-30 12:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:37:17.871319182 +0000 UTC m=+1.392766472" watchObservedRunningTime="2025-04-30 12:37:17.888962398 +0000 UTC m=+1.410409700" Apr 30 12:37:17.889634 kubelet[3269]: I0430 12:37:17.889199 3269 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-143" podStartSLOduration=2.889189318 podStartE2EDuration="2.889189318s" podCreationTimestamp="2025-04-30 12:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:37:17.885653914 +0000 UTC m=+1.407101216" watchObservedRunningTime="2025-04-30 12:37:17.889189318 +0000 UTC m=+1.410636584" Apr 30 12:37:20.125136 update_engine[1926]: I20250430 12:37:20.125036 1926 update_attempter.cc:509] Updating boot flags... 
Apr 30 12:37:20.269439 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3335) Apr 30 12:37:20.739016 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3339) Apr 30 12:37:21.244825 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3339) Apr 30 12:37:21.537087 sudo[2272]: pam_unix(sudo:session): session closed for user root Apr 30 12:37:21.578027 sshd[2271]: Connection closed by 139.178.89.65 port 55348 Apr 30 12:37:21.578876 sshd-session[2269]: pam_unix(sshd:session): session closed for user core Apr 30 12:37:21.597104 systemd[1]: sshd@6-172.31.17.143:22-139.178.89.65:55348.service: Deactivated successfully. Apr 30 12:37:21.601471 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 12:37:21.605238 systemd[1]: session-7.scope: Consumed 11.719s CPU time, 294.4M memory peak. Apr 30 12:37:21.629693 systemd-logind[1925]: Session 7 logged out. Waiting for processes to exit. Apr 30 12:37:21.637956 systemd-logind[1925]: Removed session 7. Apr 30 12:37:29.145427 kubelet[3269]: I0430 12:37:29.145326 3269 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 12:37:29.147792 containerd[1942]: time="2025-04-30T12:37:29.147558642Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 12:37:29.149007 kubelet[3269]: I0430 12:37:29.148818 3269 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 12:37:29.743697 kubelet[3269]: I0430 12:37:29.743593 3269 topology_manager.go:215] "Topology Admit Handler" podUID="5cead4b0-c986-4422-8bf6-2bcab200b7cc" podNamespace="kube-system" podName="kube-proxy-mdzdb" Apr 30 12:37:29.747691 kubelet[3269]: I0430 12:37:29.747630 3269 topology_manager.go:215] "Topology Admit Handler" podUID="afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" podNamespace="kube-system" podName="cilium-zkz9h" Apr 30 12:37:29.757637 kubelet[3269]: W0430 12:37:29.757529 3269 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-143" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-143' and this object Apr 30 12:37:29.757637 kubelet[3269]: E0430 12:37:29.757602 3269 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-143" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-143' and this object Apr 30 12:37:29.757991 kubelet[3269]: W0430 12:37:29.757938 3269 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-17-143" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-143' and this object Apr 30 12:37:29.758094 kubelet[3269]: E0430 12:37:29.758003 3269 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User 
"system:node:ip-172-31-17-143" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-143' and this object Apr 30 12:37:29.763913 systemd[1]: Created slice kubepods-besteffort-pod5cead4b0_c986_4422_8bf6_2bcab200b7cc.slice - libcontainer container kubepods-besteffort-pod5cead4b0_c986_4422_8bf6_2bcab200b7cc.slice. Apr 30 12:37:29.797626 systemd[1]: Created slice kubepods-burstable-podafafe276_0d2d_47a5_b5b2_3cb901cf3f6b.slice - libcontainer container kubepods-burstable-podafafe276_0d2d_47a5_b5b2_3cb901cf3f6b.slice. Apr 30 12:37:29.888354 kubelet[3269]: I0430 12:37:29.888272 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-lib-modules\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.888354 kubelet[3269]: I0430 12:37:29.888349 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-hubble-tls\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.888588 kubelet[3269]: I0430 12:37:29.888395 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5cead4b0-c986-4422-8bf6-2bcab200b7cc-kube-proxy\") pod \"kube-proxy-mdzdb\" (UID: \"5cead4b0-c986-4422-8bf6-2bcab200b7cc\") " pod="kube-system/kube-proxy-mdzdb" Apr 30 12:37:29.888588 kubelet[3269]: I0430 12:37:29.888430 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-etc-cni-netd\") pod \"cilium-zkz9h\" 
(UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.888588 kubelet[3269]: I0430 12:37:29.888466 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-xtables-lock\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.888588 kubelet[3269]: I0430 12:37:29.888500 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlgxf\" (UniqueName: \"kubernetes.io/projected/5cead4b0-c986-4422-8bf6-2bcab200b7cc-kube-api-access-hlgxf\") pod \"kube-proxy-mdzdb\" (UID: \"5cead4b0-c986-4422-8bf6-2bcab200b7cc\") " pod="kube-system/kube-proxy-mdzdb" Apr 30 12:37:29.888588 kubelet[3269]: I0430 12:37:29.888545 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-bpf-maps\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.888588 kubelet[3269]: I0430 12:37:29.888581 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-cgroup\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.888890 kubelet[3269]: I0430 12:37:29.888616 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-clustermesh-secrets\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.888890 
kubelet[3269]: I0430 12:37:29.888652 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-host-proc-sys-kernel\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.888890 kubelet[3269]: I0430 12:37:29.888686 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-hostproc\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.888890 kubelet[3269]: I0430 12:37:29.888722 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-config-path\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.888890 kubelet[3269]: I0430 12:37:29.888756 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-host-proc-sys-net\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.888890 kubelet[3269]: I0430 12:37:29.888793 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cni-path\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.889219 kubelet[3269]: I0430 12:37:29.888828 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-sfg5l\" (UniqueName: \"kubernetes.io/projected/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-kube-api-access-sfg5l\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:29.889219 kubelet[3269]: I0430 12:37:29.888862 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cead4b0-c986-4422-8bf6-2bcab200b7cc-lib-modules\") pod \"kube-proxy-mdzdb\" (UID: \"5cead4b0-c986-4422-8bf6-2bcab200b7cc\") " pod="kube-system/kube-proxy-mdzdb" Apr 30 12:37:29.889219 kubelet[3269]: I0430 12:37:29.888896 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cead4b0-c986-4422-8bf6-2bcab200b7cc-xtables-lock\") pod \"kube-proxy-mdzdb\" (UID: \"5cead4b0-c986-4422-8bf6-2bcab200b7cc\") " pod="kube-system/kube-proxy-mdzdb" Apr 30 12:37:29.889219 kubelet[3269]: I0430 12:37:29.888935 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-run\") pod \"cilium-zkz9h\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") " pod="kube-system/cilium-zkz9h" Apr 30 12:37:30.178896 kubelet[3269]: I0430 12:37:30.178056 3269 topology_manager.go:215] "Topology Admit Handler" podUID="ceca9ad6-3b09-4089-a90c-abf0268f349e" podNamespace="kube-system" podName="cilium-operator-599987898-9krkv" Apr 30 12:37:30.194348 systemd[1]: Created slice kubepods-besteffort-podceca9ad6_3b09_4089_a90c_abf0268f349e.slice - libcontainer container kubepods-besteffort-podceca9ad6_3b09_4089_a90c_abf0268f349e.slice. 
Apr 30 12:37:30.291701 kubelet[3269]: I0430 12:37:30.291620 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzhm\" (UniqueName: \"kubernetes.io/projected/ceca9ad6-3b09-4089-a90c-abf0268f349e-kube-api-access-2jzhm\") pod \"cilium-operator-599987898-9krkv\" (UID: \"ceca9ad6-3b09-4089-a90c-abf0268f349e\") " pod="kube-system/cilium-operator-599987898-9krkv" Apr 30 12:37:30.291899 kubelet[3269]: I0430 12:37:30.291730 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ceca9ad6-3b09-4089-a90c-abf0268f349e-cilium-config-path\") pod \"cilium-operator-599987898-9krkv\" (UID: \"ceca9ad6-3b09-4089-a90c-abf0268f349e\") " pod="kube-system/cilium-operator-599987898-9krkv" Apr 30 12:37:31.021406 kubelet[3269]: E0430 12:37:31.021255 3269 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 30 12:37:31.021406 kubelet[3269]: E0430 12:37:31.021303 3269 projected.go:200] Error preparing data for projected volume kube-api-access-sfg5l for pod kube-system/cilium-zkz9h: failed to sync configmap cache: timed out waiting for the condition Apr 30 12:37:31.021651 kubelet[3269]: E0430 12:37:31.021418 3269 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-kube-api-access-sfg5l podName:afafe276-0d2d-47a5-b5b2-3cb901cf3f6b nodeName:}" failed. No retries permitted until 2025-04-30 12:37:31.521384895 +0000 UTC m=+15.042832161 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sfg5l" (UniqueName: "kubernetes.io/projected/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-kube-api-access-sfg5l") pod "cilium-zkz9h" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b") : failed to sync configmap cache: timed out waiting for the condition Apr 30 12:37:31.036014 kubelet[3269]: E0430 12:37:31.035501 3269 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 30 12:37:31.036014 kubelet[3269]: E0430 12:37:31.035582 3269 projected.go:200] Error preparing data for projected volume kube-api-access-hlgxf for pod kube-system/kube-proxy-mdzdb: failed to sync configmap cache: timed out waiting for the condition Apr 30 12:37:31.036014 kubelet[3269]: E0430 12:37:31.035663 3269 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5cead4b0-c986-4422-8bf6-2bcab200b7cc-kube-api-access-hlgxf podName:5cead4b0-c986-4422-8bf6-2bcab200b7cc nodeName:}" failed. No retries permitted until 2025-04-30 12:37:31.535637655 +0000 UTC m=+15.057084921 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hlgxf" (UniqueName: "kubernetes.io/projected/5cead4b0-c986-4422-8bf6-2bcab200b7cc-kube-api-access-hlgxf") pod "kube-proxy-mdzdb" (UID: "5cead4b0-c986-4422-8bf6-2bcab200b7cc") : failed to sync configmap cache: timed out waiting for the condition Apr 30 12:37:31.102635 containerd[1942]: time="2025-04-30T12:37:31.102578336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9krkv,Uid:ceca9ad6-3b09-4089-a90c-abf0268f349e,Namespace:kube-system,Attempt:0,}" Apr 30 12:37:31.157231 containerd[1942]: time="2025-04-30T12:37:31.156747752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:37:31.157231 containerd[1942]: time="2025-04-30T12:37:31.156856640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:37:31.157231 containerd[1942]: time="2025-04-30T12:37:31.156893096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:31.158081 containerd[1942]: time="2025-04-30T12:37:31.157922660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:31.199181 systemd[1]: run-containerd-runc-k8s.io-7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6-runc.Mj74Bb.mount: Deactivated successfully. Apr 30 12:37:31.210337 systemd[1]: Started cri-containerd-7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6.scope - libcontainer container 7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6. 
Apr 30 12:37:31.272924 containerd[1942]: time="2025-04-30T12:37:31.272039408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9krkv,Uid:ceca9ad6-3b09-4089-a90c-abf0268f349e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\"" Apr 30 12:37:31.278615 containerd[1942]: time="2025-04-30T12:37:31.278553032Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 12:37:31.611384 containerd[1942]: time="2025-04-30T12:37:31.611320810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zkz9h,Uid:afafe276-0d2d-47a5-b5b2-3cb901cf3f6b,Namespace:kube-system,Attempt:0,}" Apr 30 12:37:31.652344 containerd[1942]: time="2025-04-30T12:37:31.652073290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:37:31.652623 containerd[1942]: time="2025-04-30T12:37:31.652178074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:37:31.653238 containerd[1942]: time="2025-04-30T12:37:31.653095534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:31.653617 containerd[1942]: time="2025-04-30T12:37:31.653348278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:31.680692 systemd[1]: Started cri-containerd-fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa.scope - libcontainer container fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa. 
Apr 30 12:37:31.729237 containerd[1942]: time="2025-04-30T12:37:31.729181823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zkz9h,Uid:afafe276-0d2d-47a5-b5b2-3cb901cf3f6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\"" Apr 30 12:37:31.882938 containerd[1942]: time="2025-04-30T12:37:31.882223607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdzdb,Uid:5cead4b0-c986-4422-8bf6-2bcab200b7cc,Namespace:kube-system,Attempt:0,}" Apr 30 12:37:31.926328 containerd[1942]: time="2025-04-30T12:37:31.926074200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:37:31.926328 containerd[1942]: time="2025-04-30T12:37:31.926195184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:37:31.926328 containerd[1942]: time="2025-04-30T12:37:31.926252556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:31.927057 containerd[1942]: time="2025-04-30T12:37:31.926838012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:37:31.959248 systemd[1]: Started cri-containerd-5661d9cff5eb5f762562fcff53e952f48a76101a5452b20da5a4be11228af7fe.scope - libcontainer container 5661d9cff5eb5f762562fcff53e952f48a76101a5452b20da5a4be11228af7fe. 
Apr 30 12:37:32.005114 containerd[1942]: time="2025-04-30T12:37:32.004383668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdzdb,Uid:5cead4b0-c986-4422-8bf6-2bcab200b7cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5661d9cff5eb5f762562fcff53e952f48a76101a5452b20da5a4be11228af7fe\"" Apr 30 12:37:32.015507 containerd[1942]: time="2025-04-30T12:37:32.015442052Z" level=info msg="CreateContainer within sandbox \"5661d9cff5eb5f762562fcff53e952f48a76101a5452b20da5a4be11228af7fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 12:37:32.050643 containerd[1942]: time="2025-04-30T12:37:32.050463608Z" level=info msg="CreateContainer within sandbox \"5661d9cff5eb5f762562fcff53e952f48a76101a5452b20da5a4be11228af7fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2b8f9fc94a9993870dcc979162ccf2a736e60af29c0d60f1e7abd7c61a0f8fc0\"" Apr 30 12:37:32.051653 containerd[1942]: time="2025-04-30T12:37:32.051593624Z" level=info msg="StartContainer for \"2b8f9fc94a9993870dcc979162ccf2a736e60af29c0d60f1e7abd7c61a0f8fc0\"" Apr 30 12:37:32.135337 systemd[1]: Started cri-containerd-2b8f9fc94a9993870dcc979162ccf2a736e60af29c0d60f1e7abd7c61a0f8fc0.scope - libcontainer container 2b8f9fc94a9993870dcc979162ccf2a736e60af29c0d60f1e7abd7c61a0f8fc0. Apr 30 12:37:32.196456 containerd[1942]: time="2025-04-30T12:37:32.196380549Z" level=info msg="StartContainer for \"2b8f9fc94a9993870dcc979162ccf2a736e60af29c0d60f1e7abd7c61a0f8fc0\" returns successfully" Apr 30 12:37:33.062584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644970323.mount: Deactivated successfully. 
Apr 30 12:37:33.458359 containerd[1942]: time="2025-04-30T12:37:33.458299943Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:37:33.460493 containerd[1942]: time="2025-04-30T12:37:33.460406459Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 30 12:37:33.463204 containerd[1942]: time="2025-04-30T12:37:33.463143971Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:37:33.468223 containerd[1942]: time="2025-04-30T12:37:33.468135659Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.189517515s" Apr 30 12:37:33.468223 containerd[1942]: time="2025-04-30T12:37:33.468218147Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 30 12:37:33.470388 containerd[1942]: time="2025-04-30T12:37:33.470316083Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 12:37:33.474065 containerd[1942]: time="2025-04-30T12:37:33.473725715Z" level=info msg="CreateContainer within sandbox 
\"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 12:37:33.506606 containerd[1942]: time="2025-04-30T12:37:33.506297699Z" level=info msg="CreateContainer within sandbox \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\"" Apr 30 12:37:33.509010 containerd[1942]: time="2025-04-30T12:37:33.508489523Z" level=info msg="StartContainer for \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\"" Apr 30 12:37:33.564285 systemd[1]: Started cri-containerd-b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59.scope - libcontainer container b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59. Apr 30 12:37:33.618704 containerd[1942]: time="2025-04-30T12:37:33.618487932Z" level=info msg="StartContainer for \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\" returns successfully" Apr 30 12:37:33.914084 kubelet[3269]: I0430 12:37:33.913876 3269 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mdzdb" podStartSLOduration=4.913851697 podStartE2EDuration="4.913851697s" podCreationTimestamp="2025-04-30 12:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:37:32.907075212 +0000 UTC m=+16.428522586" watchObservedRunningTime="2025-04-30 12:37:33.913851697 +0000 UTC m=+17.435298963" Apr 30 12:37:36.798884 kubelet[3269]: I0430 12:37:36.798761 3269 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-9krkv" podStartSLOduration=4.604723561 podStartE2EDuration="6.798738292s" podCreationTimestamp="2025-04-30 12:37:30 +0000 UTC" firstStartedPulling="2025-04-30 12:37:31.275539244 +0000 
UTC m=+14.796986522" lastFinishedPulling="2025-04-30 12:37:33.469553951 +0000 UTC m=+16.991001253" observedRunningTime="2025-04-30 12:37:33.915551941 +0000 UTC m=+17.436999231" watchObservedRunningTime="2025-04-30 12:37:36.798738292 +0000 UTC m=+20.320185570" Apr 30 12:37:43.240705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876418358.mount: Deactivated successfully. Apr 30 12:37:45.856699 containerd[1942]: time="2025-04-30T12:37:45.856632361Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:37:45.859769 containerd[1942]: time="2025-04-30T12:37:45.859699117Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 30 12:37:45.862193 containerd[1942]: time="2025-04-30T12:37:45.862096429Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 12:37:45.867305 containerd[1942]: time="2025-04-30T12:37:45.867163681Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.396774278s" Apr 30 12:37:45.867305 containerd[1942]: time="2025-04-30T12:37:45.867226201Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 30 12:37:45.874314 containerd[1942]: 
time="2025-04-30T12:37:45.874259149Z" level=info msg="CreateContainer within sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:37:45.901439 containerd[1942]: time="2025-04-30T12:37:45.901286365Z" level=info msg="CreateContainer within sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891\"" Apr 30 12:37:45.902544 containerd[1942]: time="2025-04-30T12:37:45.902488825Z" level=info msg="StartContainer for \"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891\"" Apr 30 12:37:45.960058 systemd[1]: run-containerd-runc-k8s.io-b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891-runc.nwZvH0.mount: Deactivated successfully. Apr 30 12:37:45.973380 systemd[1]: Started cri-containerd-b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891.scope - libcontainer container b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891. Apr 30 12:37:46.025188 containerd[1942]: time="2025-04-30T12:37:46.025022134Z" level=info msg="StartContainer for \"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891\" returns successfully" Apr 30 12:37:46.046184 systemd[1]: cri-containerd-b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891.scope: Deactivated successfully. Apr 30 12:37:46.894018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891-rootfs.mount: Deactivated successfully. 
Apr 30 12:37:47.190083 containerd[1942]: time="2025-04-30T12:37:47.189873395Z" level=info msg="shim disconnected" id=b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891 namespace=k8s.io Apr 30 12:37:47.190083 containerd[1942]: time="2025-04-30T12:37:47.189985607Z" level=warning msg="cleaning up after shim disconnected" id=b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891 namespace=k8s.io Apr 30 12:37:47.190083 containerd[1942]: time="2025-04-30T12:37:47.190008551Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:37:47.956675 containerd[1942]: time="2025-04-30T12:37:47.956625927Z" level=info msg="CreateContainer within sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 12:37:47.998754 containerd[1942]: time="2025-04-30T12:37:47.998426619Z" level=info msg="CreateContainer within sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05\"" Apr 30 12:37:48.000162 containerd[1942]: time="2025-04-30T12:37:48.000100607Z" level=info msg="StartContainer for \"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05\"" Apr 30 12:37:48.068272 systemd[1]: Started cri-containerd-a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05.scope - libcontainer container a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05. Apr 30 12:37:48.115505 containerd[1942]: time="2025-04-30T12:37:48.115437456Z" level=info msg="StartContainer for \"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05\" returns successfully" Apr 30 12:37:48.139061 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 12:37:48.140248 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 30 12:37:48.140902 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:37:48.150578 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 12:37:48.151067 systemd[1]: cri-containerd-a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05.scope: Deactivated successfully. Apr 30 12:37:48.187884 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 12:37:48.207635 containerd[1942]: time="2025-04-30T12:37:48.207029496Z" level=info msg="shim disconnected" id=a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05 namespace=k8s.io Apr 30 12:37:48.207635 containerd[1942]: time="2025-04-30T12:37:48.207101796Z" level=warning msg="cleaning up after shim disconnected" id=a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05 namespace=k8s.io Apr 30 12:37:48.207635 containerd[1942]: time="2025-04-30T12:37:48.207137400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:37:48.964767 containerd[1942]: time="2025-04-30T12:37:48.964532116Z" level=info msg="CreateContainer within sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 12:37:48.985474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05-rootfs.mount: Deactivated successfully. 
Apr 30 12:37:49.019776 containerd[1942]: time="2025-04-30T12:37:49.018880536Z" level=info msg="CreateContainer within sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387\"" Apr 30 12:37:49.026857 containerd[1942]: time="2025-04-30T12:37:49.026600485Z" level=info msg="StartContainer for \"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387\"" Apr 30 12:37:49.081286 systemd[1]: Started cri-containerd-0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387.scope - libcontainer container 0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387. Apr 30 12:37:49.143385 containerd[1942]: time="2025-04-30T12:37:49.143327965Z" level=info msg="StartContainer for \"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387\" returns successfully" Apr 30 12:37:49.149818 systemd[1]: cri-containerd-0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387.scope: Deactivated successfully. 
Apr 30 12:37:49.194769 containerd[1942]: time="2025-04-30T12:37:49.194655121Z" level=info msg="shim disconnected" id=0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387 namespace=k8s.io Apr 30 12:37:49.194769 containerd[1942]: time="2025-04-30T12:37:49.194755045Z" level=warning msg="cleaning up after shim disconnected" id=0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387 namespace=k8s.io Apr 30 12:37:49.195115 containerd[1942]: time="2025-04-30T12:37:49.194775793Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:37:49.968767 containerd[1942]: time="2025-04-30T12:37:49.968698277Z" level=info msg="CreateContainer within sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 12:37:49.988851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387-rootfs.mount: Deactivated successfully. Apr 30 12:37:50.006601 containerd[1942]: time="2025-04-30T12:37:50.006416809Z" level=info msg="CreateContainer within sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c\"" Apr 30 12:37:50.010738 containerd[1942]: time="2025-04-30T12:37:50.009294469Z" level=info msg="StartContainer for \"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c\"" Apr 30 12:37:50.070494 systemd[1]: Started cri-containerd-d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c.scope - libcontainer container d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c. Apr 30 12:37:50.118204 systemd[1]: cri-containerd-d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c.scope: Deactivated successfully. 
Apr 30 12:37:50.121325 containerd[1942]: time="2025-04-30T12:37:50.120991946Z" level=info msg="StartContainer for \"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c\" returns successfully" Apr 30 12:37:50.163665 containerd[1942]: time="2025-04-30T12:37:50.163593362Z" level=info msg="shim disconnected" id=d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c namespace=k8s.io Apr 30 12:37:50.164301 containerd[1942]: time="2025-04-30T12:37:50.164060870Z" level=warning msg="cleaning up after shim disconnected" id=d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c namespace=k8s.io Apr 30 12:37:50.164301 containerd[1942]: time="2025-04-30T12:37:50.164090426Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:37:50.983663 containerd[1942]: time="2025-04-30T12:37:50.983609850Z" level=info msg="CreateContainer within sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 12:37:50.989696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c-rootfs.mount: Deactivated successfully. Apr 30 12:37:51.021163 containerd[1942]: time="2025-04-30T12:37:51.020864234Z" level=info msg="CreateContainer within sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\"" Apr 30 12:37:51.025387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount576590221.mount: Deactivated successfully. 
Apr 30 12:37:51.028904 containerd[1942]: time="2025-04-30T12:37:51.027403430Z" level=info msg="StartContainer for \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\"" Apr 30 12:37:51.089362 systemd[1]: Started cri-containerd-4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25.scope - libcontainer container 4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25. Apr 30 12:37:51.147543 containerd[1942]: time="2025-04-30T12:37:51.147468747Z" level=info msg="StartContainer for \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\" returns successfully" Apr 30 12:37:51.300934 kubelet[3269]: I0430 12:37:51.300569 3269 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 12:37:51.348001 kubelet[3269]: I0430 12:37:51.346403 3269 topology_manager.go:215] "Topology Admit Handler" podUID="4f6e6079-b69e-47c6-9db8-774b6191ab69" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8r8wx" Apr 30 12:37:51.356931 kubelet[3269]: I0430 12:37:51.356872 3269 topology_manager.go:215] "Topology Admit Handler" podUID="bdd7d7ca-7fe8-4d4c-a4c1-14451ac24622" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nclrv" Apr 30 12:37:51.372747 systemd[1]: Created slice kubepods-burstable-pod4f6e6079_b69e_47c6_9db8_774b6191ab69.slice - libcontainer container kubepods-burstable-pod4f6e6079_b69e_47c6_9db8_774b6191ab69.slice. Apr 30 12:37:51.396585 systemd[1]: Created slice kubepods-burstable-podbdd7d7ca_7fe8_4d4c_a4c1_14451ac24622.slice - libcontainer container kubepods-burstable-podbdd7d7ca_7fe8_4d4c_a4c1_14451ac24622.slice. 
Apr 30 12:37:51.445496 kubelet[3269]: I0430 12:37:51.445207 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f6e6079-b69e-47c6-9db8-774b6191ab69-config-volume\") pod \"coredns-7db6d8ff4d-8r8wx\" (UID: \"4f6e6079-b69e-47c6-9db8-774b6191ab69\") " pod="kube-system/coredns-7db6d8ff4d-8r8wx" Apr 30 12:37:51.445496 kubelet[3269]: I0430 12:37:51.445269 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv7rp\" (UniqueName: \"kubernetes.io/projected/4f6e6079-b69e-47c6-9db8-774b6191ab69-kube-api-access-jv7rp\") pod \"coredns-7db6d8ff4d-8r8wx\" (UID: \"4f6e6079-b69e-47c6-9db8-774b6191ab69\") " pod="kube-system/coredns-7db6d8ff4d-8r8wx" Apr 30 12:37:51.445496 kubelet[3269]: I0430 12:37:51.445324 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2kzz\" (UniqueName: \"kubernetes.io/projected/bdd7d7ca-7fe8-4d4c-a4c1-14451ac24622-kube-api-access-t2kzz\") pod \"coredns-7db6d8ff4d-nclrv\" (UID: \"bdd7d7ca-7fe8-4d4c-a4c1-14451ac24622\") " pod="kube-system/coredns-7db6d8ff4d-nclrv" Apr 30 12:37:51.445496 kubelet[3269]: I0430 12:37:51.445365 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdd7d7ca-7fe8-4d4c-a4c1-14451ac24622-config-volume\") pod \"coredns-7db6d8ff4d-nclrv\" (UID: \"bdd7d7ca-7fe8-4d4c-a4c1-14451ac24622\") " pod="kube-system/coredns-7db6d8ff4d-nclrv" Apr 30 12:37:51.685108 containerd[1942]: time="2025-04-30T12:37:51.684716166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8r8wx,Uid:4f6e6079-b69e-47c6-9db8-774b6191ab69,Namespace:kube-system,Attempt:0,}" Apr 30 12:37:51.704127 containerd[1942]: time="2025-04-30T12:37:51.703333446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nclrv,Uid:bdd7d7ca-7fe8-4d4c-a4c1-14451ac24622,Namespace:kube-system,Attempt:0,}"
Apr 30 12:37:54.006135 (udev-worker)[4324]: Network interface NamePolicy= disabled on kernel command line. Apr 30 12:37:54.007917 (udev-worker)[4326]: Network interface NamePolicy= disabled on kernel command line. Apr 30 12:37:54.009398 systemd-networkd[1855]: cilium_host: Link UP Apr 30 12:37:54.010594 systemd-networkd[1855]: cilium_net: Link UP Apr 30 12:37:54.010928 systemd-networkd[1855]: cilium_net: Gained carrier Apr 30 12:37:54.011257 systemd-networkd[1855]: cilium_host: Gained carrier Apr 30 12:37:54.186175 systemd-networkd[1855]: cilium_vxlan: Link UP Apr 30 12:37:54.186190 systemd-networkd[1855]: cilium_vxlan: Gained carrier Apr 30 12:37:54.642161 systemd-networkd[1855]: cilium_host: Gained IPv6LL Apr 30 12:37:54.682102 kernel: NET: Registered PF_ALG protocol family Apr 30 12:37:55.025146 systemd-networkd[1855]: cilium_net: Gained IPv6LL Apr 30 12:37:55.282264 systemd-networkd[1855]: cilium_vxlan: Gained IPv6LL Apr 30 12:37:56.006823 systemd-networkd[1855]: lxc_health: Link UP Apr 30 12:37:56.019064 systemd-networkd[1855]: lxc_health: Gained carrier Apr 30 12:37:56.316630 systemd-networkd[1855]: lxc2177e94f6947: Link UP Apr 30 12:37:56.320036 kernel: eth0: renamed from tmp45ebe Apr 30 12:37:56.324911 systemd-networkd[1855]: lxc2177e94f6947: Gained carrier Apr 30 12:37:56.784210 systemd-networkd[1855]: lxc6f5f0be26602: Link UP Apr 30 12:37:56.798003 kernel: eth0: renamed from tmpee0ab Apr 30 12:37:56.807345 systemd-networkd[1855]: lxc6f5f0be26602: Gained carrier Apr 30 12:37:57.329327 systemd-networkd[1855]: lxc_health: Gained IPv6LL Apr 30 12:37:57.656416 kubelet[3269]: I0430 12:37:57.656320 3269 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zkz9h" podStartSLOduration=14.519460149 podStartE2EDuration="28.656299271s" podCreationTimestamp="2025-04-30 12:37:29 +0000 UTC" firstStartedPulling="2025-04-30 12:37:31.731596451 +0000 UTC m=+15.253043717" lastFinishedPulling="2025-04-30 12:37:45.868435573 +0000 UTC m=+29.389882839" observedRunningTime="2025-04-30 12:37:52.059309296 +0000 UTC m=+35.580756610" watchObservedRunningTime="2025-04-30 12:37:57.656299271 +0000 UTC m=+41.177746561"
Apr 30 12:37:57.841701 systemd-networkd[1855]: lxc6f5f0be26602: Gained IPv6LL Apr 30 12:37:58.353770 systemd-networkd[1855]: lxc2177e94f6947: Gained IPv6LL Apr 30 12:37:58.741170 systemd[1]: Started sshd@7-172.31.17.143:22-139.178.89.65:34388.service - OpenSSH per-connection server daemon (139.178.89.65:34388). Apr 30 12:37:59.020511 sshd[4720]: Accepted publickey for core from 139.178.89.65 port 34388 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:37:59.022539 sshd-session[4720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:37:59.035492 systemd-logind[1925]: New session 8 of user core. Apr 30 12:37:59.042286 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 12:37:59.431729 sshd[4725]: Connection closed by 139.178.89.65 port 34388 Apr 30 12:37:59.434372 sshd-session[4720]: pam_unix(sshd:session): session closed for user core Apr 30 12:37:59.443460 systemd[1]: sshd@7-172.31.17.143:22-139.178.89.65:34388.service: Deactivated successfully. Apr 30 12:37:59.450259 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 12:37:59.457785 systemd-logind[1925]: Session 8 logged out. Waiting for processes to exit. Apr 30 12:37:59.461057 systemd-logind[1925]: Removed session 8.
Apr 30 12:38:00.977472 ntpd[1917]: Listen normally on 8 cilium_host 192.168.0.138:123 Apr 30 12:38:00.977622 ntpd[1917]: Listen normally on 9 cilium_net [fe80::44a8:8fff:fe68:4623%4]:123 Apr 30 12:38:00.977703 ntpd[1917]: Listen normally on 10 cilium_host [fe80::4424:47ff:fee3:fea5%5]:123 Apr 30 12:38:00.977776 ntpd[1917]: Listen normally on 11 cilium_vxlan [fe80::6c35:58ff:fe87:f1%6]:123 Apr 30 12:38:00.977843 ntpd[1917]: Listen normally on 12 lxc_health [fe80::6416:19ff:fefc:f1ab%8]:123 Apr 30 12:38:00.977910 ntpd[1917]: Listen normally on 13 lxc2177e94f6947 [fe80::9480:c6ff:fe9c:3ca6%10]:123 Apr 30 12:38:00.978009 ntpd[1917]: Listen normally on 14 lxc6f5f0be26602 [fe80::3856:d2ff:fe0e:4d32%12]:123 Apr 30 12:38:04.493152 systemd[1]: Started sshd@8-172.31.17.143:22-139.178.89.65:34390.service - OpenSSH per-connection server daemon (139.178.89.65:34390).
Apr 30 12:38:04.775247 sshd[4745]: Accepted publickey for core from 139.178.89.65 port 34390 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:38:04.776576 sshd-session[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:38:04.795118 systemd-logind[1925]: New session 9 of user core. Apr 30 12:38:04.800041 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 12:38:05.194026 sshd[4747]: Connection closed by 139.178.89.65 port 34390 Apr 30 12:38:05.193891 sshd-session[4745]: pam_unix(sshd:session): session closed for user core Apr 30 12:38:05.204453 systemd[1]: sshd@8-172.31.17.143:22-139.178.89.65:34390.service: Deactivated successfully. Apr 30 12:38:05.212340 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 12:38:05.218995 systemd-logind[1925]: Session 9 logged out. Waiting for processes to exit. Apr 30 12:38:05.222439 systemd-logind[1925]: Removed session 9. Apr 30 12:38:05.466098 containerd[1942]: time="2025-04-30T12:38:05.463163646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:38:05.466098 containerd[1942]: time="2025-04-30T12:38:05.463265310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:38:05.466098 containerd[1942]: time="2025-04-30T12:38:05.463301526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:38:05.466098 containerd[1942]: time="2025-04-30T12:38:05.463470342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 12:38:05.538572 systemd[1]: Started cri-containerd-ee0abf45e8ad1b2c4039195038a845ef833c2e7b75e816d85cdec42b99f0dcf8.scope - libcontainer container ee0abf45e8ad1b2c4039195038a845ef833c2e7b75e816d85cdec42b99f0dcf8. Apr 30 12:38:05.551479 containerd[1942]: time="2025-04-30T12:38:05.550866811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:38:05.551479 containerd[1942]: time="2025-04-30T12:38:05.551011783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:38:05.551479 containerd[1942]: time="2025-04-30T12:38:05.551039983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:38:05.551479 containerd[1942]: time="2025-04-30T12:38:05.551199619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:38:05.625585 systemd[1]: Started cri-containerd-45ebef19d87ff692457c5bd32708e805d47cfd6d42752bdd69cc3fbd907e8af6.scope - libcontainer container 45ebef19d87ff692457c5bd32708e805d47cfd6d42752bdd69cc3fbd907e8af6.
Apr 30 12:38:05.729025 containerd[1942]: time="2025-04-30T12:38:05.728821639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8r8wx,Uid:4f6e6079-b69e-47c6-9db8-774b6191ab69,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee0abf45e8ad1b2c4039195038a845ef833c2e7b75e816d85cdec42b99f0dcf8\"" Apr 30 12:38:05.744772 containerd[1942]: time="2025-04-30T12:38:05.744308792Z" level=info msg="CreateContainer within sandbox \"ee0abf45e8ad1b2c4039195038a845ef833c2e7b75e816d85cdec42b99f0dcf8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:38:05.761401 containerd[1942]: time="2025-04-30T12:38:05.761341952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nclrv,Uid:bdd7d7ca-7fe8-4d4c-a4c1-14451ac24622,Namespace:kube-system,Attempt:0,} returns sandbox id \"45ebef19d87ff692457c5bd32708e805d47cfd6d42752bdd69cc3fbd907e8af6\"" Apr 30 12:38:05.774414 containerd[1942]: time="2025-04-30T12:38:05.774361760Z" level=info msg="CreateContainer within sandbox \"45ebef19d87ff692457c5bd32708e805d47cfd6d42752bdd69cc3fbd907e8af6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 12:38:05.807266 containerd[1942]: time="2025-04-30T12:38:05.807193220Z" level=info msg="CreateContainer within sandbox \"ee0abf45e8ad1b2c4039195038a845ef833c2e7b75e816d85cdec42b99f0dcf8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71e43dfac40eafaa1efd7ea5e7c690b8c5e50f9c205cf983934a9d02bdc25f11\"" Apr 30 12:38:05.810159 containerd[1942]: time="2025-04-30T12:38:05.809503472Z" level=info msg="StartContainer for \"71e43dfac40eafaa1efd7ea5e7c690b8c5e50f9c205cf983934a9d02bdc25f11\"" Apr 30 12:38:05.826076 containerd[1942]: time="2025-04-30T12:38:05.826008008Z" level=info msg="CreateContainer within sandbox \"45ebef19d87ff692457c5bd32708e805d47cfd6d42752bdd69cc3fbd907e8af6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ccaae82c3d27f33717fdfef79b92d8a70f245746697921f0289e93b8fa92e91f\""
Apr 30 12:38:05.830144 containerd[1942]: time="2025-04-30T12:38:05.828487424Z" level=info msg="StartContainer for \"ccaae82c3d27f33717fdfef79b92d8a70f245746697921f0289e93b8fa92e91f\"" Apr 30 12:38:05.906389 systemd[1]: Started cri-containerd-71e43dfac40eafaa1efd7ea5e7c690b8c5e50f9c205cf983934a9d02bdc25f11.scope - libcontainer container 71e43dfac40eafaa1efd7ea5e7c690b8c5e50f9c205cf983934a9d02bdc25f11. Apr 30 12:38:05.940288 systemd[1]: Started cri-containerd-ccaae82c3d27f33717fdfef79b92d8a70f245746697921f0289e93b8fa92e91f.scope - libcontainer container ccaae82c3d27f33717fdfef79b92d8a70f245746697921f0289e93b8fa92e91f. Apr 30 12:38:06.028432 containerd[1942]: time="2025-04-30T12:38:06.027890957Z" level=info msg="StartContainer for \"71e43dfac40eafaa1efd7ea5e7c690b8c5e50f9c205cf983934a9d02bdc25f11\" returns successfully" Apr 30 12:38:06.036219 containerd[1942]: time="2025-04-30T12:38:06.036149345Z" level=info msg="StartContainer for \"ccaae82c3d27f33717fdfef79b92d8a70f245746697921f0289e93b8fa92e91f\" returns successfully" Apr 30 12:38:06.082456 kubelet[3269]: I0430 12:38:06.082363 3269 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8r8wx" podStartSLOduration=36.082341269 podStartE2EDuration="36.082341269s" podCreationTimestamp="2025-04-30 12:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:38:06.075811325 +0000 UTC m=+49.597258699" watchObservedRunningTime="2025-04-30 12:38:06.082341269 +0000 UTC m=+49.603788523" Apr 30 12:38:06.131569 kubelet[3269]: I0430 12:38:06.131411 3269 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nclrv" podStartSLOduration=36.131376281 podStartE2EDuration="36.131376281s" podCreationTimestamp="2025-04-30 12:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:38:06.124004957 +0000 UTC m=+49.645452271" watchObservedRunningTime="2025-04-30 12:38:06.131376281 +0000 UTC m=+49.652823547"
Apr 30 12:38:06.476941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811446516.mount: Deactivated successfully. Apr 30 12:38:10.248542 systemd[1]: Started sshd@9-172.31.17.143:22-139.178.89.65:32982.service - OpenSSH per-connection server daemon (139.178.89.65:32982). Apr 30 12:38:10.531710 sshd[4938]: Accepted publickey for core from 139.178.89.65 port 32982 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:38:10.534625 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:38:10.544864 systemd-logind[1925]: New session 10 of user core. Apr 30 12:38:10.550280 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 12:38:10.847521 sshd[4940]: Connection closed by 139.178.89.65 port 32982 Apr 30 12:38:10.848318 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Apr 30 12:38:10.855324 systemd[1]: sshd@9-172.31.17.143:22-139.178.89.65:32982.service: Deactivated successfully. Apr 30 12:38:10.860460 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 12:38:10.862483 systemd-logind[1925]: Session 10 logged out. Waiting for processes to exit. Apr 30 12:38:10.864729 systemd-logind[1925]: Removed session 10. Apr 30 12:38:15.914552 systemd[1]: Started sshd@10-172.31.17.143:22-139.178.89.65:32996.service - OpenSSH per-connection server daemon (139.178.89.65:32996). Apr 30 12:38:16.188323 sshd[4953]: Accepted publickey for core from 139.178.89.65 port 32996 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:38:16.191120 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:38:16.201656 systemd-logind[1925]: New session 11 of user core.
Apr 30 12:38:16.212248 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 12:38:16.496313 sshd[4955]: Connection closed by 139.178.89.65 port 32996 Apr 30 12:38:16.497177 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Apr 30 12:38:16.503640 systemd[1]: sshd@10-172.31.17.143:22-139.178.89.65:32996.service: Deactivated successfully. Apr 30 12:38:16.508578 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 12:38:16.510843 systemd-logind[1925]: Session 11 logged out. Waiting for processes to exit. Apr 30 12:38:16.512901 systemd-logind[1925]: Removed session 11. Apr 30 12:38:21.558141 systemd[1]: Started sshd@11-172.31.17.143:22-139.178.89.65:40736.service - OpenSSH per-connection server daemon (139.178.89.65:40736). Apr 30 12:38:21.832927 sshd[4970]: Accepted publickey for core from 139.178.89.65 port 40736 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:38:21.835201 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:38:21.844388 systemd-logind[1925]: New session 12 of user core. Apr 30 12:38:21.852259 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 30 12:38:22.145988 sshd[4972]: Connection closed by 139.178.89.65 port 40736 Apr 30 12:38:22.146516 sshd-session[4970]: pam_unix(sshd:session): session closed for user core Apr 30 12:38:22.152789 systemd-logind[1925]: Session 12 logged out. Waiting for processes to exit. Apr 30 12:38:22.154362 systemd[1]: sshd@11-172.31.17.143:22-139.178.89.65:40736.service: Deactivated successfully. Apr 30 12:38:22.158234 systemd[1]: session-12.scope: Deactivated successfully. Apr 30 12:38:22.161909 systemd-logind[1925]: Removed session 12. Apr 30 12:38:22.204482 systemd[1]: Started sshd@12-172.31.17.143:22-139.178.89.65:40750.service - OpenSSH per-connection server daemon (139.178.89.65:40750). 
Apr 30 12:38:22.479553 sshd[4985]: Accepted publickey for core from 139.178.89.65 port 40750 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:38:22.482055 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:38:22.491469 systemd-logind[1925]: New session 13 of user core. Apr 30 12:38:22.496231 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 30 12:38:22.862689 sshd[4987]: Connection closed by 139.178.89.65 port 40750 Apr 30 12:38:22.864000 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Apr 30 12:38:22.871591 systemd[1]: session-13.scope: Deactivated successfully. Apr 30 12:38:22.874022 systemd[1]: sshd@12-172.31.17.143:22-139.178.89.65:40750.service: Deactivated successfully. Apr 30 12:38:22.884599 systemd-logind[1925]: Session 13 logged out. Waiting for processes to exit. Apr 30 12:38:22.886880 systemd-logind[1925]: Removed session 13. Apr 30 12:38:22.921466 systemd[1]: Started sshd@13-172.31.17.143:22-139.178.89.65:40762.service - OpenSSH per-connection server daemon (139.178.89.65:40762). Apr 30 12:38:23.193410 sshd[4997]: Accepted publickey for core from 139.178.89.65 port 40762 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:38:23.196250 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:38:23.206421 systemd-logind[1925]: New session 14 of user core. Apr 30 12:38:23.213211 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 30 12:38:23.524072 sshd[4999]: Connection closed by 139.178.89.65 port 40762 Apr 30 12:38:23.523211 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Apr 30 12:38:23.529505 systemd[1]: sshd@13-172.31.17.143:22-139.178.89.65:40762.service: Deactivated successfully. Apr 30 12:38:23.534361 systemd[1]: session-14.scope: Deactivated successfully. 
Apr 30 12:38:23.537273 systemd-logind[1925]: Session 14 logged out. Waiting for processes to exit. Apr 30 12:38:23.539570 systemd-logind[1925]: Removed session 14. Apr 30 12:38:28.582803 systemd[1]: Started sshd@14-172.31.17.143:22-139.178.89.65:47580.service - OpenSSH per-connection server daemon (139.178.89.65:47580). Apr 30 12:38:28.860231 sshd[5012]: Accepted publickey for core from 139.178.89.65 port 47580 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:38:28.862680 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:38:28.871530 systemd-logind[1925]: New session 15 of user core. Apr 30 12:38:28.878237 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 30 12:38:29.187123 sshd[5014]: Connection closed by 139.178.89.65 port 47580 Apr 30 12:38:29.188357 sshd-session[5012]: pam_unix(sshd:session): session closed for user core Apr 30 12:38:29.196625 systemd[1]: sshd@14-172.31.17.143:22-139.178.89.65:47580.service: Deactivated successfully. Apr 30 12:38:29.201680 systemd[1]: session-15.scope: Deactivated successfully. Apr 30 12:38:29.203411 systemd-logind[1925]: Session 15 logged out. Waiting for processes to exit. Apr 30 12:38:29.205497 systemd-logind[1925]: Removed session 15. Apr 30 12:38:34.243470 systemd[1]: Started sshd@15-172.31.17.143:22-139.178.89.65:47594.service - OpenSSH per-connection server daemon (139.178.89.65:47594). Apr 30 12:38:34.514498 sshd[5029]: Accepted publickey for core from 139.178.89.65 port 47594 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:38:34.517306 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:38:34.527108 systemd-logind[1925]: New session 16 of user core. Apr 30 12:38:34.537256 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 30 12:38:34.830512 sshd[5031]: Connection closed by 139.178.89.65 port 47594 Apr 30 12:38:34.831685 sshd-session[5029]: pam_unix(sshd:session): session closed for user core Apr 30 12:38:34.840165 systemd-logind[1925]: Session 16 logged out. Waiting for processes to exit. Apr 30 12:38:34.841220 systemd[1]: sshd@15-172.31.17.143:22-139.178.89.65:47594.service: Deactivated successfully. Apr 30 12:38:34.845413 systemd[1]: session-16.scope: Deactivated successfully. Apr 30 12:38:34.848532 systemd-logind[1925]: Removed session 16. Apr 30 12:38:39.893547 systemd[1]: Started sshd@16-172.31.17.143:22-139.178.89.65:59844.service - OpenSSH per-connection server daemon (139.178.89.65:59844). Apr 30 12:38:40.173504 sshd[5044]: Accepted publickey for core from 139.178.89.65 port 59844 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:38:40.176103 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:38:40.184919 systemd-logind[1925]: New session 17 of user core. Apr 30 12:38:40.191243 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 30 12:38:40.491202 sshd[5046]: Connection closed by 139.178.89.65 port 59844 Apr 30 12:38:40.492187 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Apr 30 12:38:40.498890 systemd[1]: sshd@16-172.31.17.143:22-139.178.89.65:59844.service: Deactivated successfully. Apr 30 12:38:40.502559 systemd[1]: session-17.scope: Deactivated successfully. Apr 30 12:38:40.504159 systemd-logind[1925]: Session 17 logged out. Waiting for processes to exit. Apr 30 12:38:40.506513 systemd-logind[1925]: Removed session 17. Apr 30 12:38:40.547514 systemd[1]: Started sshd@17-172.31.17.143:22-139.178.89.65:59848.service - OpenSSH per-connection server daemon (139.178.89.65:59848). 
Apr 30 12:38:40.817800 sshd[5059]: Accepted publickey for core from 139.178.89.65 port 59848 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:38:40.819872 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:38:40.828326 systemd-logind[1925]: New session 18 of user core. Apr 30 12:38:40.835236 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 30 12:38:41.192406 sshd[5061]: Connection closed by 139.178.89.65 port 59848 Apr 30 12:38:41.193265 sshd-session[5059]: pam_unix(sshd:session): session closed for user core Apr 30 12:38:41.199434 systemd[1]: sshd@17-172.31.17.143:22-139.178.89.65:59848.service: Deactivated successfully. Apr 30 12:38:41.200067 systemd-logind[1925]: Session 18 logged out. Waiting for processes to exit. Apr 30 12:38:41.204312 systemd[1]: session-18.scope: Deactivated successfully. Apr 30 12:38:41.208838 systemd-logind[1925]: Removed session 18. Apr 30 12:38:41.252481 systemd[1]: Started sshd@18-172.31.17.143:22-139.178.89.65:59864.service - OpenSSH per-connection server daemon (139.178.89.65:59864). Apr 30 12:38:41.527552 sshd[5071]: Accepted publickey for core from 139.178.89.65 port 59864 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:38:41.530130 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:38:41.538767 systemd-logind[1925]: New session 19 of user core. Apr 30 12:38:41.549256 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 30 12:38:44.288273 sshd[5073]: Connection closed by 139.178.89.65 port 59864 Apr 30 12:38:44.289273 sshd-session[5071]: pam_unix(sshd:session): session closed for user core Apr 30 12:38:44.302556 systemd-logind[1925]: Session 19 logged out. Waiting for processes to exit. Apr 30 12:38:44.303068 systemd[1]: sshd@18-172.31.17.143:22-139.178.89.65:59864.service: Deactivated successfully. 
Apr 30 12:38:44.311892 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 12:38:44.315219 systemd-logind[1925]: Removed session 19.
Apr 30 12:38:44.344613 systemd[1]: Started sshd@19-172.31.17.143:22-139.178.89.65:59878.service - OpenSSH per-connection server daemon (139.178.89.65:59878).
Apr 30 12:38:44.621954 sshd[5090]: Accepted publickey for core from 139.178.89.65 port 59878 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:38:44.624640 sshd-session[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:38:44.634675 systemd-logind[1925]: New session 20 of user core.
Apr 30 12:38:44.642284 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 12:38:45.165336 sshd[5092]: Connection closed by 139.178.89.65 port 59878
Apr 30 12:38:45.166309 sshd-session[5090]: pam_unix(sshd:session): session closed for user core
Apr 30 12:38:45.173073 systemd[1]: sshd@19-172.31.17.143:22-139.178.89.65:59878.service: Deactivated successfully.
Apr 30 12:38:45.178449 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 12:38:45.180577 systemd-logind[1925]: Session 20 logged out. Waiting for processes to exit.
Apr 30 12:38:45.182936 systemd-logind[1925]: Removed session 20.
Apr 30 12:38:45.225517 systemd[1]: Started sshd@20-172.31.17.143:22-139.178.89.65:59894.service - OpenSSH per-connection server daemon (139.178.89.65:59894).
Apr 30 12:38:45.506000 sshd[5102]: Accepted publickey for core from 139.178.89.65 port 59894 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:38:45.508551 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:38:45.516669 systemd-logind[1925]: New session 21 of user core.
Apr 30 12:38:45.528247 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 12:38:45.813382 sshd[5104]: Connection closed by 139.178.89.65 port 59894
Apr 30 12:38:45.814638 sshd-session[5102]: pam_unix(sshd:session): session closed for user core
Apr 30 12:38:45.821098 systemd[1]: sshd@20-172.31.17.143:22-139.178.89.65:59894.service: Deactivated successfully.
Apr 30 12:38:45.826292 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 12:38:45.829710 systemd-logind[1925]: Session 21 logged out. Waiting for processes to exit.
Apr 30 12:38:45.832716 systemd-logind[1925]: Removed session 21.
Apr 30 12:38:50.870500 systemd[1]: Started sshd@21-172.31.17.143:22-139.178.89.65:45064.service - OpenSSH per-connection server daemon (139.178.89.65:45064).
Apr 30 12:38:51.138347 sshd[5116]: Accepted publickey for core from 139.178.89.65 port 45064 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:38:51.141210 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:38:51.150384 systemd-logind[1925]: New session 22 of user core.
Apr 30 12:38:51.159245 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 12:38:51.443855 sshd[5118]: Connection closed by 139.178.89.65 port 45064
Apr 30 12:38:51.444739 sshd-session[5116]: pam_unix(sshd:session): session closed for user core
Apr 30 12:38:51.451564 systemd[1]: sshd@21-172.31.17.143:22-139.178.89.65:45064.service: Deactivated successfully.
Apr 30 12:38:51.458544 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 12:38:51.459878 systemd-logind[1925]: Session 22 logged out. Waiting for processes to exit.
Apr 30 12:38:51.463073 systemd-logind[1925]: Removed session 22.
Apr 30 12:38:56.501691 systemd[1]: Started sshd@22-172.31.17.143:22-139.178.89.65:45068.service - OpenSSH per-connection server daemon (139.178.89.65:45068).
Apr 30 12:38:56.780274 sshd[5133]: Accepted publickey for core from 139.178.89.65 port 45068 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:38:56.784230 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:38:56.792902 systemd-logind[1925]: New session 23 of user core.
Apr 30 12:38:56.801277 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 12:38:57.091084 sshd[5135]: Connection closed by 139.178.89.65 port 45068
Apr 30 12:38:57.091924 sshd-session[5133]: pam_unix(sshd:session): session closed for user core
Apr 30 12:38:57.098323 systemd[1]: sshd@22-172.31.17.143:22-139.178.89.65:45068.service: Deactivated successfully.
Apr 30 12:38:57.102505 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 12:38:57.104103 systemd-logind[1925]: Session 23 logged out. Waiting for processes to exit.
Apr 30 12:38:57.106356 systemd-logind[1925]: Removed session 23.
Apr 30 12:39:02.149432 systemd[1]: Started sshd@23-172.31.17.143:22-139.178.89.65:35268.service - OpenSSH per-connection server daemon (139.178.89.65:35268).
Apr 30 12:39:02.417492 sshd[5149]: Accepted publickey for core from 139.178.89.65 port 35268 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:39:02.419948 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:02.429245 systemd-logind[1925]: New session 24 of user core.
Apr 30 12:39:02.439257 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 12:39:02.729678 sshd[5153]: Connection closed by 139.178.89.65 port 35268
Apr 30 12:39:02.730957 sshd-session[5149]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:02.738437 systemd[1]: sshd@23-172.31.17.143:22-139.178.89.65:35268.service: Deactivated successfully.
Apr 30 12:39:02.742650 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 12:39:02.744343 systemd-logind[1925]: Session 24 logged out. Waiting for processes to exit.
Apr 30 12:39:02.746326 systemd-logind[1925]: Removed session 24.
Apr 30 12:39:07.788524 systemd[1]: Started sshd@24-172.31.17.143:22-139.178.89.65:52926.service - OpenSSH per-connection server daemon (139.178.89.65:52926).
Apr 30 12:39:08.066173 sshd[5166]: Accepted publickey for core from 139.178.89.65 port 52926 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:39:08.068549 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:08.078082 systemd-logind[1925]: New session 25 of user core.
Apr 30 12:39:08.087251 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 12:39:08.376469 sshd[5168]: Connection closed by 139.178.89.65 port 52926
Apr 30 12:39:08.377443 sshd-session[5166]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:08.384907 systemd[1]: sshd@24-172.31.17.143:22-139.178.89.65:52926.service: Deactivated successfully.
Apr 30 12:39:08.388298 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 12:39:08.390056 systemd-logind[1925]: Session 25 logged out. Waiting for processes to exit.
Apr 30 12:39:08.392689 systemd-logind[1925]: Removed session 25.
Apr 30 12:39:08.433505 systemd[1]: Started sshd@25-172.31.17.143:22-139.178.89.65:52930.service - OpenSSH per-connection server daemon (139.178.89.65:52930).
Apr 30 12:39:08.706364 sshd[5180]: Accepted publickey for core from 139.178.89.65 port 52930 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0
Apr 30 12:39:08.708531 sshd-session[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 12:39:08.717368 systemd-logind[1925]: New session 26 of user core.
Apr 30 12:39:08.728237 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 12:39:12.085341 containerd[1942]: time="2025-04-30T12:39:12.084389685Z" level=info msg="StopContainer for \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\" with timeout 30 (s)"
Apr 30 12:39:12.088735 containerd[1942]: time="2025-04-30T12:39:12.088491189Z" level=info msg="Stop container \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\" with signal terminated"
Apr 30 12:39:12.125585 systemd[1]: cri-containerd-b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59.scope: Deactivated successfully.
Apr 30 12:39:12.158234 containerd[1942]: time="2025-04-30T12:39:12.157661877Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 12:39:12.179656 containerd[1942]: time="2025-04-30T12:39:12.179607718Z" level=info msg="StopContainer for \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\" with timeout 2 (s)"
Apr 30 12:39:12.180707 containerd[1942]: time="2025-04-30T12:39:12.180658966Z" level=info msg="Stop container \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\" with signal terminated"
Apr 30 12:39:12.190731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59-rootfs.mount: Deactivated successfully.
Apr 30 12:39:12.202656 systemd-networkd[1855]: lxc_health: Link DOWN
Apr 30 12:39:12.202669 systemd-networkd[1855]: lxc_health: Lost carrier
Apr 30 12:39:12.216751 containerd[1942]: time="2025-04-30T12:39:12.216333802Z" level=info msg="shim disconnected" id=b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59 namespace=k8s.io
Apr 30 12:39:12.216751 containerd[1942]: time="2025-04-30T12:39:12.216409942Z" level=warning msg="cleaning up after shim disconnected" id=b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59 namespace=k8s.io
Apr 30 12:39:12.216751 containerd[1942]: time="2025-04-30T12:39:12.216437422Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:12.239612 systemd[1]: cri-containerd-4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25.scope: Deactivated successfully.
Apr 30 12:39:12.240257 systemd[1]: cri-containerd-4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25.scope: Consumed 14.623s CPU time, 126.5M memory peak, 136K read from disk, 12.9M written to disk.
Apr 30 12:39:12.263855 containerd[1942]: time="2025-04-30T12:39:12.263721886Z" level=info msg="StopContainer for \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\" returns successfully"
Apr 30 12:39:12.265169 containerd[1942]: time="2025-04-30T12:39:12.264871930Z" level=info msg="StopPodSandbox for \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\""
Apr 30 12:39:12.265169 containerd[1942]: time="2025-04-30T12:39:12.265018678Z" level=info msg="Container to stop \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:12.274163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6-shm.mount: Deactivated successfully.
Apr 30 12:39:12.285836 systemd[1]: cri-containerd-7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6.scope: Deactivated successfully.
Apr 30 12:39:12.307377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25-rootfs.mount: Deactivated successfully.
Apr 30 12:39:12.317021 containerd[1942]: time="2025-04-30T12:39:12.316775110Z" level=info msg="shim disconnected" id=4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25 namespace=k8s.io
Apr 30 12:39:12.317021 containerd[1942]: time="2025-04-30T12:39:12.316851862Z" level=warning msg="cleaning up after shim disconnected" id=4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25 namespace=k8s.io
Apr 30 12:39:12.317021 containerd[1942]: time="2025-04-30T12:39:12.316871086Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:12.345277 containerd[1942]: time="2025-04-30T12:39:12.345171454Z" level=info msg="shim disconnected" id=7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6 namespace=k8s.io
Apr 30 12:39:12.345277 containerd[1942]: time="2025-04-30T12:39:12.345263182Z" level=warning msg="cleaning up after shim disconnected" id=7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6 namespace=k8s.io
Apr 30 12:39:12.345600 containerd[1942]: time="2025-04-30T12:39:12.345285058Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:12.357752 containerd[1942]: time="2025-04-30T12:39:12.357369310Z" level=info msg="StopContainer for \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\" returns successfully"
Apr 30 12:39:12.359136 containerd[1942]: time="2025-04-30T12:39:12.358783486Z" level=info msg="StopPodSandbox for \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\""
Apr 30 12:39:12.359136 containerd[1942]: time="2025-04-30T12:39:12.358848178Z" level=info msg="Container to stop \"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:12.359136 containerd[1942]: time="2025-04-30T12:39:12.358872610Z" level=info msg="Container to stop \"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:12.359136 containerd[1942]: time="2025-04-30T12:39:12.358899214Z" level=info msg="Container to stop \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:12.359136 containerd[1942]: time="2025-04-30T12:39:12.358922962Z" level=info msg="Container to stop \"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:12.359136 containerd[1942]: time="2025-04-30T12:39:12.358944238Z" level=info msg="Container to stop \"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 12:39:12.375551 systemd[1]: cri-containerd-fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa.scope: Deactivated successfully.
Apr 30 12:39:12.384889 containerd[1942]: time="2025-04-30T12:39:12.384789083Z" level=info msg="TearDown network for sandbox \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\" successfully"
Apr 30 12:39:12.384889 containerd[1942]: time="2025-04-30T12:39:12.384878879Z" level=info msg="StopPodSandbox for \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\" returns successfully"
Apr 30 12:39:12.434650 kubelet[3269]: I0430 12:39:12.434588 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ceca9ad6-3b09-4089-a90c-abf0268f349e-cilium-config-path\") pod \"ceca9ad6-3b09-4089-a90c-abf0268f349e\" (UID: \"ceca9ad6-3b09-4089-a90c-abf0268f349e\") "
Apr 30 12:39:12.436742 kubelet[3269]: I0430 12:39:12.434657 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jzhm\" (UniqueName: \"kubernetes.io/projected/ceca9ad6-3b09-4089-a90c-abf0268f349e-kube-api-access-2jzhm\") pod \"ceca9ad6-3b09-4089-a90c-abf0268f349e\" (UID: \"ceca9ad6-3b09-4089-a90c-abf0268f349e\") "
Apr 30 12:39:12.442092 containerd[1942]: time="2025-04-30T12:39:12.440492687Z" level=info msg="shim disconnected" id=fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa namespace=k8s.io
Apr 30 12:39:12.442092 containerd[1942]: time="2025-04-30T12:39:12.440572283Z" level=warning msg="cleaning up after shim disconnected" id=fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa namespace=k8s.io
Apr 30 12:39:12.442092 containerd[1942]: time="2025-04-30T12:39:12.440594195Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:12.443054 kubelet[3269]: I0430 12:39:12.442890 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceca9ad6-3b09-4089-a90c-abf0268f349e-kube-api-access-2jzhm" (OuterVolumeSpecName: "kube-api-access-2jzhm") pod "ceca9ad6-3b09-4089-a90c-abf0268f349e" (UID: "ceca9ad6-3b09-4089-a90c-abf0268f349e"). InnerVolumeSpecName "kube-api-access-2jzhm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 12:39:12.449200 kubelet[3269]: I0430 12:39:12.449098 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceca9ad6-3b09-4089-a90c-abf0268f349e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ceca9ad6-3b09-4089-a90c-abf0268f349e" (UID: "ceca9ad6-3b09-4089-a90c-abf0268f349e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 12:39:12.465446 containerd[1942]: time="2025-04-30T12:39:12.465394871Z" level=info msg="TearDown network for sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" successfully"
Apr 30 12:39:12.465618 containerd[1942]: time="2025-04-30T12:39:12.465590123Z" level=info msg="StopPodSandbox for \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" returns successfully"
Apr 30 12:39:12.536023 kubelet[3269]: I0430 12:39:12.534851 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-etc-cni-netd\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.536023 kubelet[3269]: I0430 12:39:12.534921 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-host-proc-sys-kernel\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.536023 kubelet[3269]: I0430 12:39:12.534958 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cni-path\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.536023 kubelet[3269]: I0430 12:39:12.535025 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-run\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.536023 kubelet[3269]: I0430 12:39:12.535020 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:12.536023 kubelet[3269]: I0430 12:39:12.535071 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-hubble-tls\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.536456 kubelet[3269]: I0430 12:39:12.535103 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-cgroup\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.536456 kubelet[3269]: I0430 12:39:12.535106 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cni-path" (OuterVolumeSpecName: "cni-path") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:12.536456 kubelet[3269]: I0430 12:39:12.535137 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-bpf-maps\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.536456 kubelet[3269]: I0430 12:39:12.535143 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:12.536456 kubelet[3269]: I0430 12:39:12.535176 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-config-path\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.536715 kubelet[3269]: I0430 12:39:12.535182 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:12.536715 kubelet[3269]: I0430 12:39:12.535209 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-host-proc-sys-net\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.536715 kubelet[3269]: I0430 12:39:12.535216 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:12.536715 kubelet[3269]: I0430 12:39:12.535246 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-clustermesh-secrets\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.536715 kubelet[3269]: I0430 12:39:12.535282 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-lib-modules\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.537042 kubelet[3269]: I0430 12:39:12.535318 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfg5l\" (UniqueName: \"kubernetes.io/projected/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-kube-api-access-sfg5l\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.537042 kubelet[3269]: I0430 12:39:12.535353 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-xtables-lock\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.537042 kubelet[3269]: I0430 12:39:12.535385 3269 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-hostproc\") pod \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\" (UID: \"afafe276-0d2d-47a5-b5b2-3cb901cf3f6b\") "
Apr 30 12:39:12.537042 kubelet[3269]: I0430 12:39:12.535442 3269 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-run\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.537042 kubelet[3269]: I0430 12:39:12.535465 3269 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-cgroup\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.537042 kubelet[3269]: I0430 12:39:12.535489 3269 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ceca9ad6-3b09-4089-a90c-abf0268f349e-cilium-config-path\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.537042 kubelet[3269]: I0430 12:39:12.535532 3269 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2jzhm\" (UniqueName: \"kubernetes.io/projected/ceca9ad6-3b09-4089-a90c-abf0268f349e-kube-api-access-2jzhm\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.537399 kubelet[3269]: I0430 12:39:12.535553 3269 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-etc-cni-netd\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.537399 kubelet[3269]: I0430 12:39:12.535591 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-hostproc" (OuterVolumeSpecName: "hostproc") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:12.537399 kubelet[3269]: I0430 12:39:12.535631 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:12.539632 kubelet[3269]: I0430 12:39:12.539565 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 12:39:12.543141 kubelet[3269]: I0430 12:39:12.543078 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 12:39:12.543446 kubelet[3269]: I0430 12:39:12.543409 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:12.545130 kubelet[3269]: I0430 12:39:12.545065 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 12:39:12.545276 kubelet[3269]: I0430 12:39:12.545168 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:12.545276 kubelet[3269]: I0430 12:39:12.545214 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 12:39:12.548621 kubelet[3269]: I0430 12:39:12.548533 3269 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-kube-api-access-sfg5l" (OuterVolumeSpecName: "kube-api-access-sfg5l") pod "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" (UID: "afafe276-0d2d-47a5-b5b2-3cb901cf3f6b"). InnerVolumeSpecName "kube-api-access-sfg5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 12:39:12.637300 kubelet[3269]: I0430 12:39:12.635756 3269 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-sfg5l\" (UniqueName: \"kubernetes.io/projected/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-kube-api-access-sfg5l\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.637300 kubelet[3269]: I0430 12:39:12.635803 3269 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-xtables-lock\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.637300 kubelet[3269]: I0430 12:39:12.635826 3269 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-hostproc\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.637300 kubelet[3269]: I0430 12:39:12.635846 3269 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-host-proc-sys-kernel\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.637300 kubelet[3269]: I0430 12:39:12.635876 3269 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cni-path\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.637300 kubelet[3269]: I0430 12:39:12.635894 3269 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-hubble-tls\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.637300 kubelet[3269]: I0430 12:39:12.635914 3269 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-host-proc-sys-net\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.637300 kubelet[3269]: I0430 12:39:12.635933 3269 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-bpf-maps\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.637810 kubelet[3269]: I0430 12:39:12.635953 3269 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-cilium-config-path\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.637810 kubelet[3269]: I0430 12:39:12.636002 3269 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-clustermesh-secrets\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.637810 kubelet[3269]: I0430 12:39:12.636026 3269 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b-lib-modules\") on node \"ip-172-31-17-143\" DevicePath \"\""
Apr 30 12:39:12.793013 systemd[1]: Removed slice kubepods-besteffort-podceca9ad6_3b09_4089_a90c_abf0268f349e.slice - libcontainer container kubepods-besteffort-podceca9ad6_3b09_4089_a90c_abf0268f349e.slice.
Apr 30 12:39:12.796689 systemd[1]: Removed slice kubepods-burstable-podafafe276_0d2d_47a5_b5b2_3cb901cf3f6b.slice - libcontainer container kubepods-burstable-podafafe276_0d2d_47a5_b5b2_3cb901cf3f6b.slice.
Apr 30 12:39:12.796925 systemd[1]: kubepods-burstable-podafafe276_0d2d_47a5_b5b2_3cb901cf3f6b.slice: Consumed 14.771s CPU time, 126.9M memory peak, 136K read from disk, 12.9M written to disk. Apr 30 12:39:13.111413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa-rootfs.mount: Deactivated successfully. Apr 30 12:39:13.111635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa-shm.mount: Deactivated successfully. Apr 30 12:39:13.111780 systemd[1]: var-lib-kubelet-pods-afafe276\x2d0d2d\x2d47a5\x2db5b2\x2d3cb901cf3f6b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsfg5l.mount: Deactivated successfully. Apr 30 12:39:13.111916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6-rootfs.mount: Deactivated successfully. Apr 30 12:39:13.112612 systemd[1]: var-lib-kubelet-pods-ceca9ad6\x2d3b09\x2d4089\x2da90c\x2dabf0268f349e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2jzhm.mount: Deactivated successfully. Apr 30 12:39:13.112866 systemd[1]: var-lib-kubelet-pods-afafe276\x2d0d2d\x2d47a5\x2db5b2\x2d3cb901cf3f6b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 12:39:13.113051 systemd[1]: var-lib-kubelet-pods-afafe276\x2d0d2d\x2d47a5\x2db5b2\x2d3cb901cf3f6b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 30 12:39:13.235585 kubelet[3269]: I0430 12:39:13.235308 3269 scope.go:117] "RemoveContainer" containerID="b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59" Apr 30 12:39:13.244138 containerd[1942]: time="2025-04-30T12:39:13.243437819Z" level=info msg="RemoveContainer for \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\"" Apr 30 12:39:13.259206 containerd[1942]: time="2025-04-30T12:39:13.259156499Z" level=info msg="RemoveContainer for \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\" returns successfully" Apr 30 12:39:13.259844 kubelet[3269]: I0430 12:39:13.259758 3269 scope.go:117] "RemoveContainer" containerID="b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59" Apr 30 12:39:13.260661 containerd[1942]: time="2025-04-30T12:39:13.260514467Z" level=error msg="ContainerStatus for \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\": not found" Apr 30 12:39:13.260996 kubelet[3269]: E0430 12:39:13.260918 3269 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\": not found" containerID="b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59" Apr 30 12:39:13.261219 kubelet[3269]: I0430 12:39:13.261015 3269 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59"} err="failed to get container status \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\": rpc error: code = NotFound desc = an error occurred when try to find container \"b53b8ed1c9b0eb41714580d610672fe469670dbbb47b93ea3b474236aae25a59\": not found" Apr 30 
12:39:13.261292 kubelet[3269]: I0430 12:39:13.261217 3269 scope.go:117] "RemoveContainer" containerID="4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25" Apr 30 12:39:13.265786 containerd[1942]: time="2025-04-30T12:39:13.265719347Z" level=info msg="RemoveContainer for \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\"" Apr 30 12:39:13.277247 containerd[1942]: time="2025-04-30T12:39:13.277070603Z" level=info msg="RemoveContainer for \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\" returns successfully" Apr 30 12:39:13.277652 kubelet[3269]: I0430 12:39:13.277577 3269 scope.go:117] "RemoveContainer" containerID="d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c" Apr 30 12:39:13.282682 containerd[1942]: time="2025-04-30T12:39:13.282058103Z" level=info msg="RemoveContainer for \"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c\"" Apr 30 12:39:13.291108 containerd[1942]: time="2025-04-30T12:39:13.290881427Z" level=info msg="RemoveContainer for \"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c\" returns successfully" Apr 30 12:39:13.293794 kubelet[3269]: I0430 12:39:13.293757 3269 scope.go:117] "RemoveContainer" containerID="0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387" Apr 30 12:39:13.298341 containerd[1942]: time="2025-04-30T12:39:13.298292975Z" level=info msg="RemoveContainer for \"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387\"" Apr 30 12:39:13.305200 containerd[1942]: time="2025-04-30T12:39:13.305149595Z" level=info msg="RemoveContainer for \"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387\" returns successfully" Apr 30 12:39:13.305991 kubelet[3269]: I0430 12:39:13.305937 3269 scope.go:117] "RemoveContainer" containerID="a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05" Apr 30 12:39:13.308626 containerd[1942]: time="2025-04-30T12:39:13.308248991Z" level=info msg="RemoveContainer for 
\"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05\"" Apr 30 12:39:13.314322 containerd[1942]: time="2025-04-30T12:39:13.314272823Z" level=info msg="RemoveContainer for \"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05\" returns successfully" Apr 30 12:39:13.314925 kubelet[3269]: I0430 12:39:13.314785 3269 scope.go:117] "RemoveContainer" containerID="b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891" Apr 30 12:39:13.317007 containerd[1942]: time="2025-04-30T12:39:13.316859435Z" level=info msg="RemoveContainer for \"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891\"" Apr 30 12:39:13.325037 containerd[1942]: time="2025-04-30T12:39:13.322995755Z" level=info msg="RemoveContainer for \"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891\" returns successfully" Apr 30 12:39:13.325037 containerd[1942]: time="2025-04-30T12:39:13.323633231Z" level=error msg="ContainerStatus for \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\": not found" Apr 30 12:39:13.325037 containerd[1942]: time="2025-04-30T12:39:13.324428723Z" level=error msg="ContainerStatus for \"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c\": not found" Apr 30 12:39:13.325037 containerd[1942]: time="2025-04-30T12:39:13.324958067Z" level=error msg="ContainerStatus for \"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387\": not found" Apr 30 12:39:13.325352 kubelet[3269]: I0430 12:39:13.323291 3269 
scope.go:117] "RemoveContainer" containerID="4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25" Apr 30 12:39:13.325352 kubelet[3269]: E0430 12:39:13.323881 3269 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\": not found" containerID="4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25" Apr 30 12:39:13.325352 kubelet[3269]: I0430 12:39:13.323960 3269 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25"} err="failed to get container status \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a1972a1f10c5bd17b99879e0b06aa3997193f4af664b523e3eb03c38038cc25\": not found" Apr 30 12:39:13.325352 kubelet[3269]: I0430 12:39:13.324027 3269 scope.go:117] "RemoveContainer" containerID="d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c" Apr 30 12:39:13.325352 kubelet[3269]: E0430 12:39:13.324663 3269 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c\": not found" containerID="d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c" Apr 30 12:39:13.325352 kubelet[3269]: I0430 12:39:13.324700 3269 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c"} err="failed to get container status \"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"d3a740e0e116ec3812b0996374c9430244c5b0fddaf1fd74d5b5e14eca6a638c\": not found" Apr 30 12:39:13.325352 kubelet[3269]: I0430 12:39:13.324731 3269 scope.go:117] "RemoveContainer" containerID="0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387" Apr 30 12:39:13.325728 kubelet[3269]: E0430 12:39:13.325247 3269 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387\": not found" containerID="0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387" Apr 30 12:39:13.325728 kubelet[3269]: I0430 12:39:13.325287 3269 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387"} err="failed to get container status \"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387\": rpc error: code = NotFound desc = an error occurred when try to find container \"0cecfd51210646a60ac7c36ebda0f16117e197f7e2b0b25e1a7276a8ef32a387\": not found" Apr 30 12:39:13.325728 kubelet[3269]: I0430 12:39:13.325318 3269 scope.go:117] "RemoveContainer" containerID="a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05" Apr 30 12:39:13.325873 containerd[1942]: time="2025-04-30T12:39:13.325583363Z" level=error msg="ContainerStatus for \"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05\": not found" Apr 30 12:39:13.325927 kubelet[3269]: E0430 12:39:13.325780 3269 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05\": not found" 
containerID="a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05" Apr 30 12:39:13.325927 kubelet[3269]: I0430 12:39:13.325815 3269 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05"} err="failed to get container status \"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0ed486f961fce471e47030337b1719ad5cd1d0b9392ee80563d9d415a313f05\": not found" Apr 30 12:39:13.325927 kubelet[3269]: I0430 12:39:13.325885 3269 scope.go:117] "RemoveContainer" containerID="b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891" Apr 30 12:39:13.326250 containerd[1942]: time="2025-04-30T12:39:13.326191487Z" level=error msg="ContainerStatus for \"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891\": not found" Apr 30 12:39:13.326604 kubelet[3269]: E0430 12:39:13.326436 3269 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891\": not found" containerID="b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891" Apr 30 12:39:13.326604 kubelet[3269]: I0430 12:39:13.326479 3269 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891"} err="failed to get container status \"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891\": rpc error: code = NotFound desc = an error occurred when try to find container \"b62fb84079c59dd2eb0d657c390aa80c2ae49b21ab08305d399d46f5a5994891\": not found" Apr 30 
12:39:14.045923 sshd[5182]: Connection closed by 139.178.89.65 port 52930 Apr 30 12:39:14.046891 sshd-session[5180]: pam_unix(sshd:session): session closed for user core Apr 30 12:39:14.054304 systemd[1]: sshd@25-172.31.17.143:22-139.178.89.65:52930.service: Deactivated successfully. Apr 30 12:39:14.060370 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 12:39:14.061485 systemd[1]: session-26.scope: Consumed 2.587s CPU time, 25.6M memory peak. Apr 30 12:39:14.062744 systemd-logind[1925]: Session 26 logged out. Waiting for processes to exit. Apr 30 12:39:14.064590 systemd-logind[1925]: Removed session 26. Apr 30 12:39:14.102483 systemd[1]: Started sshd@26-172.31.17.143:22-139.178.89.65:52942.service - OpenSSH per-connection server daemon (139.178.89.65:52942). Apr 30 12:39:14.383782 sshd[5339]: Accepted publickey for core from 139.178.89.65 port 52942 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:39:14.386600 sshd-session[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:39:14.395417 systemd-logind[1925]: New session 27 of user core. Apr 30 12:39:14.403253 systemd[1]: Started session-27.scope - Session 27 of User core. 
Apr 30 12:39:14.783957 kubelet[3269]: I0430 12:39:14.783685 3269 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" path="/var/lib/kubelet/pods/afafe276-0d2d-47a5-b5b2-3cb901cf3f6b/volumes" Apr 30 12:39:14.786902 kubelet[3269]: I0430 12:39:14.786424 3269 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceca9ad6-3b09-4089-a90c-abf0268f349e" path="/var/lib/kubelet/pods/ceca9ad6-3b09-4089-a90c-abf0268f349e/volumes" Apr 30 12:39:14.977425 ntpd[1917]: Deleting interface #12 lxc_health, fe80::6416:19ff:fefc:f1ab%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Apr 30 12:39:14.978090 ntpd[1917]: 30 Apr 12:39:14 ntpd[1917]: Deleting interface #12 lxc_health, fe80::6416:19ff:fefc:f1ab%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Apr 30 12:39:16.705179 containerd[1942]: time="2025-04-30T12:39:16.705055756Z" level=info msg="StopPodSandbox for \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\"" Apr 30 12:39:16.707425 containerd[1942]: time="2025-04-30T12:39:16.707107456Z" level=info msg="TearDown network for sandbox \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\" successfully" Apr 30 12:39:16.707425 containerd[1942]: time="2025-04-30T12:39:16.707162368Z" level=info msg="StopPodSandbox for \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\" returns successfully" Apr 30 12:39:16.709340 containerd[1942]: time="2025-04-30T12:39:16.709286212Z" level=info msg="RemovePodSandbox for \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\"" Apr 30 12:39:16.710182 containerd[1942]: time="2025-04-30T12:39:16.709778992Z" level=info msg="Forcibly stopping sandbox \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\"" Apr 30 12:39:16.710182 containerd[1942]: time="2025-04-30T12:39:16.709939456Z" level=info msg="TearDown network for sandbox 
\"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\" successfully" Apr 30 12:39:16.718804 containerd[1942]: time="2025-04-30T12:39:16.718489396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 12:39:16.718804 containerd[1942]: time="2025-04-30T12:39:16.718602688Z" level=info msg="RemovePodSandbox \"7c5d6530764ab9083f41337e7317714b0f8d6adbf5acd11d4c3c6f0a006db1f6\" returns successfully" Apr 30 12:39:16.720812 containerd[1942]: time="2025-04-30T12:39:16.719760988Z" level=info msg="StopPodSandbox for \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\"" Apr 30 12:39:16.720812 containerd[1942]: time="2025-04-30T12:39:16.719899492Z" level=info msg="TearDown network for sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" successfully" Apr 30 12:39:16.720812 containerd[1942]: time="2025-04-30T12:39:16.719921548Z" level=info msg="StopPodSandbox for \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" returns successfully" Apr 30 12:39:16.721659 containerd[1942]: time="2025-04-30T12:39:16.721243768Z" level=info msg="RemovePodSandbox for \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\"" Apr 30 12:39:16.721776 containerd[1942]: time="2025-04-30T12:39:16.721691164Z" level=info msg="Forcibly stopping sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\"" Apr 30 12:39:16.721954 containerd[1942]: time="2025-04-30T12:39:16.721907740Z" level=info msg="TearDown network for sandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" successfully" Apr 30 12:39:16.730615 containerd[1942]: time="2025-04-30T12:39:16.730489900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 12:39:16.730751 containerd[1942]: time="2025-04-30T12:39:16.730626916Z" level=info msg="RemovePodSandbox \"fadabd53ffb599b119e78004dbe33f759791b6de938ff2fac305a946c444c0fa\" returns successfully" Apr 30 12:39:16.736399 sshd[5341]: Connection closed by 139.178.89.65 port 52942 Apr 30 12:39:16.738380 sshd-session[5339]: pam_unix(sshd:session): session closed for user core Apr 30 12:39:16.752942 systemd[1]: sshd@26-172.31.17.143:22-139.178.89.65:52942.service: Deactivated successfully. Apr 30 12:39:16.764178 kubelet[3269]: I0430 12:39:16.764109 3269 topology_manager.go:215] "Topology Admit Handler" podUID="9d1bac1f-f847-419a-8155-04d3a1e6d30b" podNamespace="kube-system" podName="cilium-xjw6p" Apr 30 12:39:16.764740 kubelet[3269]: E0430 12:39:16.764220 3269 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" containerName="mount-cgroup" Apr 30 12:39:16.764740 kubelet[3269]: E0430 12:39:16.764264 3269 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" containerName="apply-sysctl-overwrites" Apr 30 12:39:16.764740 kubelet[3269]: E0430 12:39:16.764284 3269 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" containerName="clean-cilium-state" Apr 30 12:39:16.764740 kubelet[3269]: E0430 12:39:16.764300 3269 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ceca9ad6-3b09-4089-a90c-abf0268f349e" containerName="cilium-operator" Apr 30 12:39:16.764740 kubelet[3269]: E0430 12:39:16.764314 3269 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" containerName="mount-bpf-fs" Apr 30 12:39:16.764740 kubelet[3269]: E0430 12:39:16.764353 3269 cpu_manager.go:395] "RemoveStaleState: 
removing container" podUID="afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" containerName="cilium-agent" Apr 30 12:39:16.764740 kubelet[3269]: I0430 12:39:16.764400 3269 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceca9ad6-3b09-4089-a90c-abf0268f349e" containerName="cilium-operator" Apr 30 12:39:16.764740 kubelet[3269]: I0430 12:39:16.764439 3269 memory_manager.go:354] "RemoveStaleState removing state" podUID="afafe276-0d2d-47a5-b5b2-3cb901cf3f6b" containerName="cilium-agent" Apr 30 12:39:16.766834 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 12:39:16.770502 systemd[1]: session-27.scope: Consumed 2.053s CPU time, 23.7M memory peak. Apr 30 12:39:16.773293 systemd-logind[1925]: Session 27 logged out. Waiting for processes to exit. Apr 30 12:39:16.807333 systemd-logind[1925]: Removed session 27. Apr 30 12:39:16.825365 systemd[1]: Started sshd@27-172.31.17.143:22-139.178.89.65:35320.service - OpenSSH per-connection server daemon (139.178.89.65:35320). Apr 30 12:39:16.840882 systemd[1]: Created slice kubepods-burstable-pod9d1bac1f_f847_419a_8155_04d3a1e6d30b.slice - libcontainer container kubepods-burstable-pod9d1bac1f_f847_419a_8155_04d3a1e6d30b.slice. 
Apr 30 12:39:16.857531 kubelet[3269]: I0430 12:39:16.857364 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d1bac1f-f847-419a-8155-04d3a1e6d30b-hostproc\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.857531 kubelet[3269]: I0430 12:39:16.857432 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d1bac1f-f847-419a-8155-04d3a1e6d30b-cilium-cgroup\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.857531 kubelet[3269]: I0430 12:39:16.857469 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d1bac1f-f847-419a-8155-04d3a1e6d30b-etc-cni-netd\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.857531 kubelet[3269]: I0430 12:39:16.857504 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d1bac1f-f847-419a-8155-04d3a1e6d30b-lib-modules\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.857531 kubelet[3269]: I0430 12:39:16.857537 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d1bac1f-f847-419a-8155-04d3a1e6d30b-clustermesh-secrets\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.857874 kubelet[3269]: I0430 12:39:16.857572 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d1bac1f-f847-419a-8155-04d3a1e6d30b-host-proc-sys-net\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.857874 kubelet[3269]: I0430 12:39:16.857611 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d1bac1f-f847-419a-8155-04d3a1e6d30b-cilium-run\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.857874 kubelet[3269]: I0430 12:39:16.857684 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d1bac1f-f847-419a-8155-04d3a1e6d30b-host-proc-sys-kernel\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.857874 kubelet[3269]: I0430 12:39:16.857720 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d1bac1f-f847-419a-8155-04d3a1e6d30b-bpf-maps\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.857874 kubelet[3269]: I0430 12:39:16.857753 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d1bac1f-f847-419a-8155-04d3a1e6d30b-cni-path\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.857874 kubelet[3269]: I0430 12:39:16.857787 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/9d1bac1f-f847-419a-8155-04d3a1e6d30b-cilium-config-path\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.862337 kubelet[3269]: I0430 12:39:16.857825 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d1bac1f-f847-419a-8155-04d3a1e6d30b-xtables-lock\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.862337 kubelet[3269]: I0430 12:39:16.857859 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9d1bac1f-f847-419a-8155-04d3a1e6d30b-cilium-ipsec-secrets\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.862337 kubelet[3269]: I0430 12:39:16.857893 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d1bac1f-f847-419a-8155-04d3a1e6d30b-hubble-tls\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.862337 kubelet[3269]: I0430 12:39:16.857929 3269 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9jh\" (UniqueName: \"kubernetes.io/projected/9d1bac1f-f847-419a-8155-04d3a1e6d30b-kube-api-access-fw9jh\") pod \"cilium-xjw6p\" (UID: \"9d1bac1f-f847-419a-8155-04d3a1e6d30b\") " pod="kube-system/cilium-xjw6p" Apr 30 12:39:16.920734 kubelet[3269]: E0430 12:39:16.920687 3269 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 12:39:17.146797 sshd[5351]: Accepted publickey 
for core from 139.178.89.65 port 35320 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:39:17.150100 sshd-session[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:39:17.156238 containerd[1942]: time="2025-04-30T12:39:17.155828078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xjw6p,Uid:9d1bac1f-f847-419a-8155-04d3a1e6d30b,Namespace:kube-system,Attempt:0,}" Apr 30 12:39:17.159623 systemd-logind[1925]: New session 28 of user core. Apr 30 12:39:17.167282 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 30 12:39:17.204357 containerd[1942]: time="2025-04-30T12:39:17.203928147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 12:39:17.204357 containerd[1942]: time="2025-04-30T12:39:17.204078063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 12:39:17.204523 containerd[1942]: time="2025-04-30T12:39:17.204130791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:39:17.204811 containerd[1942]: time="2025-04-30T12:39:17.204683679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 12:39:17.239601 systemd[1]: Started cri-containerd-afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9.scope - libcontainer container afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9. 
Apr 30 12:39:17.284131 containerd[1942]: time="2025-04-30T12:39:17.283944339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xjw6p,Uid:9d1bac1f-f847-419a-8155-04d3a1e6d30b,Namespace:kube-system,Attempt:0,} returns sandbox id \"afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9\"" Apr 30 12:39:17.290448 containerd[1942]: time="2025-04-30T12:39:17.290380803Z" level=info msg="CreateContainer within sandbox \"afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 12:39:17.318862 containerd[1942]: time="2025-04-30T12:39:17.318695139Z" level=info msg="CreateContainer within sandbox \"afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d0e5ecfb0a14f6ea4c9f1be51bc92a7d383be3293460c57876339a03237b42d1\"" Apr 30 12:39:17.322672 containerd[1942]: time="2025-04-30T12:39:17.322378275Z" level=info msg="StartContainer for \"d0e5ecfb0a14f6ea4c9f1be51bc92a7d383be3293460c57876339a03237b42d1\"" Apr 30 12:39:17.341560 sshd[5360]: Connection closed by 139.178.89.65 port 35320 Apr 30 12:39:17.343294 sshd-session[5351]: pam_unix(sshd:session): session closed for user core Apr 30 12:39:17.354196 systemd[1]: sshd@27-172.31.17.143:22-139.178.89.65:35320.service: Deactivated successfully. Apr 30 12:39:17.361800 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 12:39:17.365833 systemd-logind[1925]: Session 28 logged out. Waiting for processes to exit. Apr 30 12:39:17.371687 systemd-logind[1925]: Removed session 28. Apr 30 12:39:17.402260 systemd[1]: Started cri-containerd-d0e5ecfb0a14f6ea4c9f1be51bc92a7d383be3293460c57876339a03237b42d1.scope - libcontainer container d0e5ecfb0a14f6ea4c9f1be51bc92a7d383be3293460c57876339a03237b42d1. 
Apr 30 12:39:17.409603 systemd[1]: Started sshd@28-172.31.17.143:22-139.178.89.65:35326.service - OpenSSH per-connection server daemon (139.178.89.65:35326). Apr 30 12:39:17.483019 containerd[1942]: time="2025-04-30T12:39:17.480403084Z" level=info msg="StartContainer for \"d0e5ecfb0a14f6ea4c9f1be51bc92a7d383be3293460c57876339a03237b42d1\" returns successfully" Apr 30 12:39:17.498788 systemd[1]: cri-containerd-d0e5ecfb0a14f6ea4c9f1be51bc92a7d383be3293460c57876339a03237b42d1.scope: Deactivated successfully. Apr 30 12:39:17.556727 containerd[1942]: time="2025-04-30T12:39:17.556531948Z" level=info msg="shim disconnected" id=d0e5ecfb0a14f6ea4c9f1be51bc92a7d383be3293460c57876339a03237b42d1 namespace=k8s.io Apr 30 12:39:17.557153 containerd[1942]: time="2025-04-30T12:39:17.557104900Z" level=warning msg="cleaning up after shim disconnected" id=d0e5ecfb0a14f6ea4c9f1be51bc92a7d383be3293460c57876339a03237b42d1 namespace=k8s.io Apr 30 12:39:17.557286 containerd[1942]: time="2025-04-30T12:39:17.557259928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 12:39:17.581337 containerd[1942]: time="2025-04-30T12:39:17.581279164Z" level=warning msg="cleanup warnings time=\"2025-04-30T12:39:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 12:39:17.724197 sshd[5425]: Accepted publickey for core from 139.178.89.65 port 35326 ssh2: RSA SHA256:B8wrLU/D77hP1E74WVx6wQCV0bZ1v6SD1kOX6G+S5R0 Apr 30 12:39:17.728320 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 12:39:17.739106 systemd-logind[1925]: New session 29 of user core. Apr 30 12:39:17.745246 systemd[1]: Started session-29.scope - Session 29 of User core. 
Apr 30 12:39:18.277332 containerd[1942]: time="2025-04-30T12:39:18.277242880Z" level=info msg="CreateContainer within sandbox \"afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 12:39:18.307552 containerd[1942]: time="2025-04-30T12:39:18.306910648Z" level=info msg="CreateContainer within sandbox \"afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b4565fb3dd7b8f05c5b7bc887d2a74938ea873b33fd8ce370c716d88377620f8\""
Apr 30 12:39:18.310033 containerd[1942]: time="2025-04-30T12:39:18.309445372Z" level=info msg="StartContainer for \"b4565fb3dd7b8f05c5b7bc887d2a74938ea873b33fd8ce370c716d88377620f8\""
Apr 30 12:39:18.371177 systemd[1]: Started cri-containerd-b4565fb3dd7b8f05c5b7bc887d2a74938ea873b33fd8ce370c716d88377620f8.scope - libcontainer container b4565fb3dd7b8f05c5b7bc887d2a74938ea873b33fd8ce370c716d88377620f8.
Apr 30 12:39:18.419598 containerd[1942]: time="2025-04-30T12:39:18.419523881Z" level=info msg="StartContainer for \"b4565fb3dd7b8f05c5b7bc887d2a74938ea873b33fd8ce370c716d88377620f8\" returns successfully"
Apr 30 12:39:18.433316 systemd[1]: cri-containerd-b4565fb3dd7b8f05c5b7bc887d2a74938ea873b33fd8ce370c716d88377620f8.scope: Deactivated successfully.
Apr 30 12:39:18.481638 containerd[1942]: time="2025-04-30T12:39:18.481514825Z" level=info msg="shim disconnected" id=b4565fb3dd7b8f05c5b7bc887d2a74938ea873b33fd8ce370c716d88377620f8 namespace=k8s.io
Apr 30 12:39:18.481638 containerd[1942]: time="2025-04-30T12:39:18.481587401Z" level=warning msg="cleaning up after shim disconnected" id=b4565fb3dd7b8f05c5b7bc887d2a74938ea873b33fd8ce370c716d88377620f8 namespace=k8s.io
Apr 30 12:39:18.481638 containerd[1942]: time="2025-04-30T12:39:18.481608953Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:18.975223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4565fb3dd7b8f05c5b7bc887d2a74938ea873b33fd8ce370c716d88377620f8-rootfs.mount: Deactivated successfully.
Apr 30 12:39:19.287435 containerd[1942]: time="2025-04-30T12:39:19.287244329Z" level=info msg="CreateContainer within sandbox \"afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 12:39:19.322599 containerd[1942]: time="2025-04-30T12:39:19.322528421Z" level=info msg="CreateContainer within sandbox \"afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"85d31eabc5a237623c8f0618efed38e2e78dbc1303b47ecb0ba59deb2b4123be\""
Apr 30 12:39:19.324518 containerd[1942]: time="2025-04-30T12:39:19.323399117Z" level=info msg="StartContainer for \"85d31eabc5a237623c8f0618efed38e2e78dbc1303b47ecb0ba59deb2b4123be\""
Apr 30 12:39:19.381461 systemd[1]: Started cri-containerd-85d31eabc5a237623c8f0618efed38e2e78dbc1303b47ecb0ba59deb2b4123be.scope - libcontainer container 85d31eabc5a237623c8f0618efed38e2e78dbc1303b47ecb0ba59deb2b4123be.
Apr 30 12:39:19.439674 containerd[1942]: time="2025-04-30T12:39:19.439209210Z" level=info msg="StartContainer for \"85d31eabc5a237623c8f0618efed38e2e78dbc1303b47ecb0ba59deb2b4123be\" returns successfully"
Apr 30 12:39:19.443829 systemd[1]: cri-containerd-85d31eabc5a237623c8f0618efed38e2e78dbc1303b47ecb0ba59deb2b4123be.scope: Deactivated successfully.
Apr 30 12:39:19.489023 containerd[1942]: time="2025-04-30T12:39:19.488762970Z" level=info msg="shim disconnected" id=85d31eabc5a237623c8f0618efed38e2e78dbc1303b47ecb0ba59deb2b4123be namespace=k8s.io
Apr 30 12:39:19.489023 containerd[1942]: time="2025-04-30T12:39:19.488836830Z" level=warning msg="cleaning up after shim disconnected" id=85d31eabc5a237623c8f0618efed38e2e78dbc1303b47ecb0ba59deb2b4123be namespace=k8s.io
Apr 30 12:39:19.489023 containerd[1942]: time="2025-04-30T12:39:19.488856570Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:19.631483 kubelet[3269]: I0430 12:39:19.631094 3269 setters.go:580] "Node became not ready" node="ip-172-31-17-143" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T12:39:19Z","lastTransitionTime":"2025-04-30T12:39:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 30 12:39:19.974279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85d31eabc5a237623c8f0618efed38e2e78dbc1303b47ecb0ba59deb2b4123be-rootfs.mount: Deactivated successfully.
Apr 30 12:39:20.288085 containerd[1942]: time="2025-04-30T12:39:20.287521902Z" level=info msg="CreateContainer within sandbox \"afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 12:39:20.328114 containerd[1942]: time="2025-04-30T12:39:20.324344874Z" level=info msg="CreateContainer within sandbox \"afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8823f6aa7c7003b202e060b02d93e703d29e1c0b93c9dbb3bb38803bd20ced27\""
Apr 30 12:39:20.328114 containerd[1942]: time="2025-04-30T12:39:20.325665198Z" level=info msg="StartContainer for \"8823f6aa7c7003b202e060b02d93e703d29e1c0b93c9dbb3bb38803bd20ced27\""
Apr 30 12:39:20.394288 systemd[1]: Started cri-containerd-8823f6aa7c7003b202e060b02d93e703d29e1c0b93c9dbb3bb38803bd20ced27.scope - libcontainer container 8823f6aa7c7003b202e060b02d93e703d29e1c0b93c9dbb3bb38803bd20ced27.
Apr 30 12:39:20.441067 systemd[1]: cri-containerd-8823f6aa7c7003b202e060b02d93e703d29e1c0b93c9dbb3bb38803bd20ced27.scope: Deactivated successfully.
Apr 30 12:39:20.448107 containerd[1942]: time="2025-04-30T12:39:20.448047151Z" level=info msg="StartContainer for \"8823f6aa7c7003b202e060b02d93e703d29e1c0b93c9dbb3bb38803bd20ced27\" returns successfully"
Apr 30 12:39:20.495463 containerd[1942]: time="2025-04-30T12:39:20.495219811Z" level=info msg="shim disconnected" id=8823f6aa7c7003b202e060b02d93e703d29e1c0b93c9dbb3bb38803bd20ced27 namespace=k8s.io
Apr 30 12:39:20.495463 containerd[1942]: time="2025-04-30T12:39:20.495344719Z" level=warning msg="cleaning up after shim disconnected" id=8823f6aa7c7003b202e060b02d93e703d29e1c0b93c9dbb3bb38803bd20ced27 namespace=k8s.io
Apr 30 12:39:20.495463 containerd[1942]: time="2025-04-30T12:39:20.495388543Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:20.975254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8823f6aa7c7003b202e060b02d93e703d29e1c0b93c9dbb3bb38803bd20ced27-rootfs.mount: Deactivated successfully.
Apr 30 12:39:21.297037 containerd[1942]: time="2025-04-30T12:39:21.296843455Z" level=info msg="CreateContainer within sandbox \"afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 12:39:21.331626 containerd[1942]: time="2025-04-30T12:39:21.331434547Z" level=info msg="CreateContainer within sandbox \"afb2984473dba0e977e91acc250f816412bb2f092658831ede7257473f538ce9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0d1e241830956a4c0a17ff7f02d729c5586ec67b4edcd54cc6e54dc131ca55b\""
Apr 30 12:39:21.333671 containerd[1942]: time="2025-04-30T12:39:21.332175475Z" level=info msg="StartContainer for \"d0d1e241830956a4c0a17ff7f02d729c5586ec67b4edcd54cc6e54dc131ca55b\""
Apr 30 12:39:21.383292 systemd[1]: Started cri-containerd-d0d1e241830956a4c0a17ff7f02d729c5586ec67b4edcd54cc6e54dc131ca55b.scope - libcontainer container d0d1e241830956a4c0a17ff7f02d729c5586ec67b4edcd54cc6e54dc131ca55b.
Apr 30 12:39:21.447788 containerd[1942]: time="2025-04-30T12:39:21.447667220Z" level=info msg="StartContainer for \"d0d1e241830956a4c0a17ff7f02d729c5586ec67b4edcd54cc6e54dc131ca55b\" returns successfully"
Apr 30 12:39:22.237043 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 30 12:39:24.464165 systemd[1]: run-containerd-runc-k8s.io-d0d1e241830956a4c0a17ff7f02d729c5586ec67b4edcd54cc6e54dc131ca55b-runc.1Ub8fz.mount: Deactivated successfully.
Apr 30 12:39:26.395316 systemd-networkd[1855]: lxc_health: Link UP
Apr 30 12:39:26.406933 (udev-worker)[6188]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 12:39:26.410148 systemd-networkd[1855]: lxc_health: Gained carrier
Apr 30 12:39:27.196985 kubelet[3269]: I0430 12:39:27.195714 3269 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xjw6p" podStartSLOduration=11.19569288 podStartE2EDuration="11.19569288s" podCreationTimestamp="2025-04-30 12:39:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 12:39:22.341253548 +0000 UTC m=+125.862700838" watchObservedRunningTime="2025-04-30 12:39:27.19569288 +0000 UTC m=+130.717140158"
Apr 30 12:39:28.017229 systemd-networkd[1855]: lxc_health: Gained IPv6LL
Apr 30 12:39:29.246181 systemd[1]: run-containerd-runc-k8s.io-d0d1e241830956a4c0a17ff7f02d729c5586ec67b4edcd54cc6e54dc131ca55b-runc.FgQuyX.mount: Deactivated successfully.
Apr 30 12:39:30.977511 ntpd[1917]: Listen normally on 15 lxc_health [fe80::d8f6:9ff:feaa:3804%14]:123
Apr 30 12:39:30.978827 ntpd[1917]: 30 Apr 12:39:30 ntpd[1917]: Listen normally on 15 lxc_health [fe80::d8f6:9ff:feaa:3804%14]:123
Apr 30 12:39:31.810423 sshd[5471]: Connection closed by 139.178.89.65 port 35326
Apr 30 12:39:31.812536 sshd-session[5425]: pam_unix(sshd:session): session closed for user core
Apr 30 12:39:31.820619 systemd[1]: sshd@28-172.31.17.143:22-139.178.89.65:35326.service: Deactivated successfully.
Apr 30 12:39:31.827319 systemd[1]: session-29.scope: Deactivated successfully.
Apr 30 12:39:31.833482 systemd-logind[1925]: Session 29 logged out. Waiting for processes to exit.
Apr 30 12:39:31.837259 systemd-logind[1925]: Removed session 29.
Apr 30 12:39:45.798825 systemd[1]: cri-containerd-6a773cb00ca946a547456a12afad859624ba0ed9ee9c69e468678411f4db757a.scope: Deactivated successfully.
Apr 30 12:39:45.799440 systemd[1]: cri-containerd-6a773cb00ca946a547456a12afad859624ba0ed9ee9c69e468678411f4db757a.scope: Consumed 4.822s CPU time, 57.6M memory peak.
Apr 30 12:39:45.840143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a773cb00ca946a547456a12afad859624ba0ed9ee9c69e468678411f4db757a-rootfs.mount: Deactivated successfully.
Apr 30 12:39:45.860682 containerd[1942]: time="2025-04-30T12:39:45.860605689Z" level=info msg="shim disconnected" id=6a773cb00ca946a547456a12afad859624ba0ed9ee9c69e468678411f4db757a namespace=k8s.io
Apr 30 12:39:45.861716 containerd[1942]: time="2025-04-30T12:39:45.860877633Z" level=warning msg="cleaning up after shim disconnected" id=6a773cb00ca946a547456a12afad859624ba0ed9ee9c69e468678411f4db757a namespace=k8s.io
Apr 30 12:39:45.861716 containerd[1942]: time="2025-04-30T12:39:45.860902461Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:46.379603 kubelet[3269]: I0430 12:39:46.378671 3269 scope.go:117] "RemoveContainer" containerID="6a773cb00ca946a547456a12afad859624ba0ed9ee9c69e468678411f4db757a"
Apr 30 12:39:46.383413 containerd[1942]: time="2025-04-30T12:39:46.383148847Z" level=info msg="CreateContainer within sandbox \"1673d135f108ff4defc9963beba03560eced833769a40859200f68a6389d0fe9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 12:39:46.408998 containerd[1942]: time="2025-04-30T12:39:46.408800228Z" level=info msg="CreateContainer within sandbox \"1673d135f108ff4defc9963beba03560eced833769a40859200f68a6389d0fe9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c7fb9525b8b987b64a7fad25335ca153fe41ce4cebd1b8dc90d53daa34da51db\""
Apr 30 12:39:46.409707 containerd[1942]: time="2025-04-30T12:39:46.409669076Z" level=info msg="StartContainer for \"c7fb9525b8b987b64a7fad25335ca153fe41ce4cebd1b8dc90d53daa34da51db\""
Apr 30 12:39:46.466258 systemd[1]: Started cri-containerd-c7fb9525b8b987b64a7fad25335ca153fe41ce4cebd1b8dc90d53daa34da51db.scope - libcontainer container c7fb9525b8b987b64a7fad25335ca153fe41ce4cebd1b8dc90d53daa34da51db.
Apr 30 12:39:46.536531 containerd[1942]: time="2025-04-30T12:39:46.536460752Z" level=info msg="StartContainer for \"c7fb9525b8b987b64a7fad25335ca153fe41ce4cebd1b8dc90d53daa34da51db\" returns successfully"
Apr 30 12:39:49.407612 kubelet[3269]: E0430 12:39:49.407319 3269 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-143?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 30 12:39:50.639659 systemd[1]: cri-containerd-374cae1227392b5568526bdb9cc90453ad60205cb5c8d4ab62e12565c6a25d36.scope: Deactivated successfully.
Apr 30 12:39:50.641327 systemd[1]: cri-containerd-374cae1227392b5568526bdb9cc90453ad60205cb5c8d4ab62e12565c6a25d36.scope: Consumed 3.254s CPU time, 21.7M memory peak.
Apr 30 12:39:50.679532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-374cae1227392b5568526bdb9cc90453ad60205cb5c8d4ab62e12565c6a25d36-rootfs.mount: Deactivated successfully.
Apr 30 12:39:50.695054 containerd[1942]: time="2025-04-30T12:39:50.694902769Z" level=info msg="shim disconnected" id=374cae1227392b5568526bdb9cc90453ad60205cb5c8d4ab62e12565c6a25d36 namespace=k8s.io
Apr 30 12:39:50.695054 containerd[1942]: time="2025-04-30T12:39:50.695050237Z" level=warning msg="cleaning up after shim disconnected" id=374cae1227392b5568526bdb9cc90453ad60205cb5c8d4ab62e12565c6a25d36 namespace=k8s.io
Apr 30 12:39:50.695955 containerd[1942]: time="2025-04-30T12:39:50.695112217Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 12:39:51.401524 kubelet[3269]: I0430 12:39:51.401456 3269 scope.go:117] "RemoveContainer" containerID="374cae1227392b5568526bdb9cc90453ad60205cb5c8d4ab62e12565c6a25d36"
Apr 30 12:39:51.405296 containerd[1942]: time="2025-04-30T12:39:51.405223140Z" level=info msg="CreateContainer within sandbox \"f924e0e229b30b17ae957347916ea619c0ec1894a9283cca1975ba8a2d8bcdbb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 12:39:51.433399 containerd[1942]: time="2025-04-30T12:39:51.433341541Z" level=info msg="CreateContainer within sandbox \"f924e0e229b30b17ae957347916ea619c0ec1894a9283cca1975ba8a2d8bcdbb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"115903af26ed667dfdd790538444263c72992163823ef2a029b1301342c09d11\""
Apr 30 12:39:51.434402 containerd[1942]: time="2025-04-30T12:39:51.434358469Z" level=info msg="StartContainer for \"115903af26ed667dfdd790538444263c72992163823ef2a029b1301342c09d11\""
Apr 30 12:39:51.495305 systemd[1]: Started cri-containerd-115903af26ed667dfdd790538444263c72992163823ef2a029b1301342c09d11.scope - libcontainer container 115903af26ed667dfdd790538444263c72992163823ef2a029b1301342c09d11.
Apr 30 12:39:51.563469 containerd[1942]: time="2025-04-30T12:39:51.563226685Z" level=info msg="StartContainer for \"115903af26ed667dfdd790538444263c72992163823ef2a029b1301342c09d11\" returns successfully"
Apr 30 12:39:59.408709 kubelet[3269]: E0430 12:39:59.408360 3269 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-143?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"