Apr 30 00:43:34.241262 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 30 00:43:34.241314 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:43:34.241340 kernel: KASLR disabled due to lack of seed
Apr 30 00:43:34.241357 kernel: efi: EFI v2.7 by EDK II
Apr 30 00:43:34.241373 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Apr 30 00:43:34.241389 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:43:34.241407 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 30 00:43:34.241423 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 30 00:43:34.241439 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 30 00:43:34.241454 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Apr 30 00:43:34.241475 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 30 00:43:34.241491 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 30 00:43:34.241507 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 30 00:43:34.241523 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 30 00:43:34.241542 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 30 00:43:34.241564 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 30 00:43:34.241582 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 30 00:43:34.241598 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 30 00:43:34.241615 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 30 00:43:34.241631 kernel: printk: bootconsole [uart0] enabled
Apr 30 00:43:34.241648 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:43:34.241665 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 30 00:43:34.241682 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 30 00:43:34.241698 kernel: Zone ranges:
Apr 30 00:43:34.241715 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 00:43:34.241731 kernel: DMA32 empty
Apr 30 00:43:34.241752 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 30 00:43:34.241769 kernel: Movable zone start for each node
Apr 30 00:43:34.241786 kernel: Early memory node ranges
Apr 30 00:43:34.241802 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 30 00:43:34.241819 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 30 00:43:34.241835 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 30 00:43:34.241852 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 30 00:43:34.241869 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 30 00:43:34.241885 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 30 00:43:34.241901 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 30 00:43:34.241918 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 30 00:43:34.241935 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 30 00:43:34.241956 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 30 00:43:34.241974 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:43:34.241997 kernel: psci: PSCIv1.0 detected in firmware.
Apr 30 00:43:34.242015 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:43:34.242032 kernel: psci: Trusted OS migration not required
Apr 30 00:43:34.242054 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:43:34.242098 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:43:34.242120 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:43:34.242147 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 00:43:34.242165 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:43:34.242183 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:43:34.242201 kernel: CPU features: detected: Spectre-v2
Apr 30 00:43:34.242218 kernel: CPU features: detected: Spectre-v3a
Apr 30 00:43:34.242236 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:43:34.242253 kernel: CPU features: detected: ARM erratum 1742098
Apr 30 00:43:34.242270 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 30 00:43:34.242295 kernel: alternatives: applying boot alternatives
Apr 30 00:43:34.242316 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:43:34.242335 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:43:34.242352 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:43:34.242370 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:43:34.242387 kernel: Fallback order for Node 0: 0
Apr 30 00:43:34.242405 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 30 00:43:34.242422 kernel: Policy zone: Normal
Apr 30 00:43:34.242439 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:43:34.242456 kernel: software IO TLB: area num 2.
Apr 30 00:43:34.242474 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 30 00:43:34.242496 kernel: Memory: 3820152K/4030464K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210312K reserved, 0K cma-reserved)
Apr 30 00:43:34.242514 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 00:43:34.242531 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:43:34.242550 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:43:34.242567 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 00:43:34.242585 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:43:34.242603 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:43:34.242620 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:43:34.242638 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 00:43:34.242655 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:43:34.242672 kernel: GICv3: 96 SPIs implemented
Apr 30 00:43:34.242693 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:43:34.242711 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:43:34.242728 kernel: GICv3: GICv3 features: 16 PPIs
Apr 30 00:43:34.242745 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 30 00:43:34.242762 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 30 00:43:34.242780 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:43:34.242797 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:43:34.242815 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 30 00:43:34.242832 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 30 00:43:34.242849 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 30 00:43:34.242866 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:43:34.242883 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 30 00:43:34.242906 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 30 00:43:34.242923 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 30 00:43:34.242941 kernel: Console: colour dummy device 80x25
Apr 30 00:43:34.242959 kernel: printk: console [tty1] enabled
Apr 30 00:43:34.242976 kernel: ACPI: Core revision 20230628
Apr 30 00:43:34.242994 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 30 00:43:34.243012 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:43:34.243030 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:43:34.243047 kernel: landlock: Up and running.
Apr 30 00:43:34.243263 kernel: SELinux: Initializing.
Apr 30 00:43:34.243285 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:43:34.243304 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:43:34.243322 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:43:34.243341 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:43:34.243359 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:43:34.243378 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:43:34.243396 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 30 00:43:34.243413 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 30 00:43:34.243437 kernel: Remapping and enabling EFI services.
Apr 30 00:43:34.243456 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:43:34.243473 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:43:34.243491 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 30 00:43:34.243509 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 30 00:43:34.243526 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 30 00:43:34.243544 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 00:43:34.243561 kernel: SMP: Total of 2 processors activated.
Apr 30 00:43:34.243579 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:43:34.243601 kernel: CPU features: detected: 32-bit EL1 Support
Apr 30 00:43:34.243619 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:43:34.243637 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:43:34.243665 kernel: alternatives: applying system-wide alternatives
Apr 30 00:43:34.243687 kernel: devtmpfs: initialized
Apr 30 00:43:34.243706 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:43:34.243724 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 00:43:34.243760 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:43:34.243783 kernel: SMBIOS 3.0.0 present.
Apr 30 00:43:34.243803 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 30 00:43:34.243828 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:43:34.243847 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:43:34.243866 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:43:34.243885 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:43:34.243903 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:43:34.243922 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Apr 30 00:43:34.243940 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:43:34.243963 kernel: cpuidle: using governor menu
Apr 30 00:43:34.243981 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:43:34.244000 kernel: ASID allocator initialised with 65536 entries
Apr 30 00:43:34.244018 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:43:34.244037 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:43:34.244055 kernel: Modules: 17504 pages in range for non-PLT usage
Apr 30 00:43:34.246834 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:43:34.246870 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:43:34.246890 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:43:34.246923 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:43:34.246942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:43:34.246961 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:43:34.246979 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:43:34.246998 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:43:34.247017 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:43:34.247036 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:43:34.247055 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:43:34.247099 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:43:34.247126 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:43:34.247145 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:43:34.247163 kernel: ACPI: Interpreter enabled
Apr 30 00:43:34.247182 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:43:34.247201 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:43:34.247219 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Apr 30 00:43:34.247555 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:43:34.247813 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:43:34.248059 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:43:34.248342 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Apr 30 00:43:34.248568 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Apr 30 00:43:34.248595 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 30 00:43:34.248614 kernel: acpiphp: Slot [1] registered
Apr 30 00:43:34.248633 kernel: acpiphp: Slot [2] registered
Apr 30 00:43:34.248652 kernel: acpiphp: Slot [3] registered
Apr 30 00:43:34.248671 kernel: acpiphp: Slot [4] registered
Apr 30 00:43:34.248698 kernel: acpiphp: Slot [5] registered
Apr 30 00:43:34.248717 kernel: acpiphp: Slot [6] registered
Apr 30 00:43:34.248736 kernel: acpiphp: Slot [7] registered
Apr 30 00:43:34.248754 kernel: acpiphp: Slot [8] registered
Apr 30 00:43:34.248773 kernel: acpiphp: Slot [9] registered
Apr 30 00:43:34.248792 kernel: acpiphp: Slot [10] registered
Apr 30 00:43:34.248811 kernel: acpiphp: Slot [11] registered
Apr 30 00:43:34.248830 kernel: acpiphp: Slot [12] registered
Apr 30 00:43:34.248848 kernel: acpiphp: Slot [13] registered
Apr 30 00:43:34.248871 kernel: acpiphp: Slot [14] registered
Apr 30 00:43:34.248891 kernel: acpiphp: Slot [15] registered
Apr 30 00:43:34.248909 kernel: acpiphp: Slot [16] registered
Apr 30 00:43:34.248928 kernel: acpiphp: Slot [17] registered
Apr 30 00:43:34.248948 kernel: acpiphp: Slot [18] registered
Apr 30 00:43:34.248966 kernel: acpiphp: Slot [19] registered
Apr 30 00:43:34.248985 kernel: acpiphp: Slot [20] registered
Apr 30 00:43:34.249004 kernel: acpiphp: Slot [21] registered
Apr 30 00:43:34.249023 kernel: acpiphp: Slot [22] registered
Apr 30 00:43:34.249042 kernel: acpiphp: Slot [23] registered
Apr 30 00:43:34.249131 kernel: acpiphp: Slot [24] registered
Apr 30 00:43:34.249156 kernel: acpiphp: Slot [25] registered
Apr 30 00:43:34.249175 kernel: acpiphp: Slot [26] registered
Apr 30 00:43:34.249194 kernel: acpiphp: Slot [27] registered
Apr 30 00:43:34.249213 kernel: acpiphp: Slot [28] registered
Apr 30 00:43:34.249231 kernel: acpiphp: Slot [29] registered
Apr 30 00:43:34.249250 kernel: acpiphp: Slot [30] registered
Apr 30 00:43:34.249270 kernel: acpiphp: Slot [31] registered
Apr 30 00:43:34.249289 kernel: PCI host bridge to bus 0000:00
Apr 30 00:43:34.249545 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 30 00:43:34.249763 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:43:34.249971 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 30 00:43:34.252737 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Apr 30 00:43:34.253023 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 30 00:43:34.253419 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 30 00:43:34.253688 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 30 00:43:34.253976 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 30 00:43:34.254346 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 30 00:43:34.254592 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 30 00:43:34.254863 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 30 00:43:34.255148 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 30 00:43:34.255381 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 30 00:43:34.255616 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 30 00:43:34.255892 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 30 00:43:34.256197 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Apr 30 00:43:34.256448 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Apr 30 00:43:34.256707 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Apr 30 00:43:34.256968 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Apr 30 00:43:34.257821 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Apr 30 00:43:34.258150 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 30 00:43:34.258361 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:43:34.258570 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 30 00:43:34.258600 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:43:34.258621 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:43:34.258642 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:43:34.258663 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:43:34.258683 kernel: iommu: Default domain type: Translated
Apr 30 00:43:34.258714 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:43:34.258734 kernel: efivars: Registered efivars operations
Apr 30 00:43:34.258754 kernel: vgaarb: loaded
Apr 30 00:43:34.258772 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:43:34.258791 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:43:34.258810 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:43:34.258829 kernel: pnp: PnP ACPI init
Apr 30 00:43:34.261055 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 30 00:43:34.261207 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:43:34.261241 kernel: NET: Registered PF_INET protocol family
Apr 30 00:43:34.261260 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:43:34.261280 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:43:34.261300 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:43:34.261319 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:43:34.261338 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:43:34.261357 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:43:34.261376 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:43:34.261396 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:43:34.261420 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:43:34.261438 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:43:34.261457 kernel: kvm [1]: HYP mode not available
Apr 30 00:43:34.261476 kernel: Initialise system trusted keyrings
Apr 30 00:43:34.261495 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:43:34.261514 kernel: Key type asymmetric registered
Apr 30 00:43:34.261533 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:43:34.261551 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:43:34.261572 kernel: io scheduler mq-deadline registered
Apr 30 00:43:34.261602 kernel: io scheduler kyber registered
Apr 30 00:43:34.261621 kernel: io scheduler bfq registered
Apr 30 00:43:34.261906 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 30 00:43:34.261940 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 00:43:34.261959 kernel: ACPI: button: Power Button [PWRB]
Apr 30 00:43:34.261979 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 30 00:43:34.261998 kernel: ACPI: button: Sleep Button [SLPB]
Apr 30 00:43:34.262017 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:43:34.262045 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 00:43:34.262340 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 30 00:43:34.262371 kernel: printk: console [ttyS0] disabled
Apr 30 00:43:34.262391 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 30 00:43:34.262410 kernel: printk: console [ttyS0] enabled
Apr 30 00:43:34.262428 kernel: printk: bootconsole [uart0] disabled
Apr 30 00:43:34.262447 kernel: thunder_xcv, ver 1.0
Apr 30 00:43:34.262466 kernel: thunder_bgx, ver 1.0
Apr 30 00:43:34.262484 kernel: nicpf, ver 1.0
Apr 30 00:43:34.262509 kernel: nicvf, ver 1.0
Apr 30 00:43:34.262728 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:43:34.262945 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:43:33 UTC (1745973813)
Apr 30 00:43:34.262973 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:43:34.262992 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 30 00:43:34.263011 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:43:34.263030 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:43:34.263048 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:43:34.263103 kernel: Segment Routing with IPv6
Apr 30 00:43:34.263124 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:43:34.263169 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:43:34.263190 kernel: Key type dns_resolver registered
Apr 30 00:43:34.263209 kernel: registered taskstats version 1
Apr 30 00:43:34.263228 kernel: Loading compiled-in X.509 certificates
Apr 30 00:43:34.263247 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 00:43:34.263265 kernel: Key type .fscrypt registered
Apr 30 00:43:34.263283 kernel: Key type fscrypt-provisioning registered
Apr 30 00:43:34.263308 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:43:34.263327 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:43:34.263346 kernel: ima: No architecture policies found
Apr 30 00:43:34.263365 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:43:34.263383 kernel: clk: Disabling unused clocks
Apr 30 00:43:34.263402 kernel: Freeing unused kernel memory: 39424K
Apr 30 00:43:34.263421 kernel: Run /init as init process
Apr 30 00:43:34.263440 kernel: with arguments:
Apr 30 00:43:34.263458 kernel: /init
Apr 30 00:43:34.263477 kernel: with environment:
Apr 30 00:43:34.263500 kernel: HOME=/
Apr 30 00:43:34.263518 kernel: TERM=linux
Apr 30 00:43:34.263537 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:43:34.263561 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:43:34.263585 systemd[1]: Detected virtualization amazon.
Apr 30 00:43:34.263606 systemd[1]: Detected architecture arm64.
Apr 30 00:43:34.263626 systemd[1]: Running in initrd.
Apr 30 00:43:34.263651 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:43:34.263671 systemd[1]: Hostname set to .
Apr 30 00:43:34.263692 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:43:34.263713 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:43:34.263733 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:43:34.263775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:43:34.263799 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:43:34.263822 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:43:34.263852 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:43:34.263873 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:43:34.263898 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:43:34.263920 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:43:34.263942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:43:34.263964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:43:34.263984 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:43:34.264011 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:43:34.264032 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:43:34.264052 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:43:34.264108 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:43:34.264134 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:43:34.264156 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:43:34.264176 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:43:34.264198 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:43:34.264227 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:43:34.264249 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:43:34.264269 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:43:34.264290 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:43:34.264310 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:43:34.264331 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:43:34.264351 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:43:34.264371 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:43:34.264391 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:43:34.264418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:43:34.264438 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:43:34.264459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:43:34.264479 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:43:34.264501 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:43:34.264528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:43:34.264598 systemd-journald[251]: Collecting audit messages is disabled.
Apr 30 00:43:34.266687 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:43:34.266721 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:43:34.266743 kernel: Bridge firewalling registered
Apr 30 00:43:34.266764 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:43:34.266786 systemd-journald[251]: Journal started
Apr 30 00:43:34.266825 systemd-journald[251]: Runtime Journal (/run/log/journal/ec297a5594ca0ed8114d1eca1ee5c83c) is 8.0M, max 75.3M, 67.3M free.
Apr 30 00:43:34.215809 systemd-modules-load[252]: Inserted module 'overlay'
Apr 30 00:43:34.270172 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:43:34.250178 systemd-modules-load[252]: Inserted module 'br_netfilter'
Apr 30 00:43:34.272993 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:43:34.286397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:43:34.312378 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:43:34.321396 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:43:34.325144 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:43:34.361322 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:43:34.366702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:43:34.385663 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:43:34.391885 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:43:34.407528 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:43:34.421882 dracut-cmdline[286]: dracut-dracut-053
Apr 30 00:43:34.430745 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:43:34.495286 systemd-resolved[290]: Positive Trust Anchors:
Apr 30 00:43:34.497118 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:43:34.497186 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:43:34.616387 kernel: SCSI subsystem initialized
Apr 30 00:43:34.624206 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:43:34.637198 kernel: iscsi: registered transport (tcp)
Apr 30 00:43:34.661297 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:43:34.661372 kernel: QLogic iSCSI HBA Driver
Apr 30 00:43:34.732105 kernel: random: crng init done
Apr 30 00:43:34.730405 systemd-resolved[290]: Defaulting to hostname 'linux'.
Apr 30 00:43:34.734687 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:43:34.739398 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:43:34.770135 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:43:34.780414 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:43:34.817121 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:43:34.817198 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:43:34.820109 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:43:34.889138 kernel: raid6: neonx8 gen() 6672 MB/s
Apr 30 00:43:34.906125 kernel: raid6: neonx4 gen() 6424 MB/s
Apr 30 00:43:34.923118 kernel: raid6: neonx2 gen() 5409 MB/s
Apr 30 00:43:34.940130 kernel: raid6: neonx1 gen() 3916 MB/s
Apr 30 00:43:34.957123 kernel: raid6: int64x8 gen() 3773 MB/s
Apr 30 00:43:34.974126 kernel: raid6: int64x4 gen() 3669 MB/s
Apr 30 00:43:34.991132 kernel: raid6: int64x2 gen() 3556 MB/s
Apr 30 00:43:35.009091 kernel: raid6: int64x1 gen() 2712 MB/s
Apr 30 00:43:35.009175 kernel: raid6: using algorithm neonx8 gen() 6672 MB/s
Apr 30 00:43:35.027022 kernel: raid6: .... xor() 4757 MB/s, rmw enabled
Apr 30 00:43:35.027124 kernel: raid6: using neon recovery algorithm
Apr 30 00:43:35.036407 kernel: xor: measuring software checksum speed
Apr 30 00:43:35.036495 kernel: 8regs : 11030 MB/sec
Apr 30 00:43:35.037632 kernel: 32regs : 11705 MB/sec
Apr 30 00:43:35.038942 kernel: arm64_neon : 9493 MB/sec
Apr 30 00:43:35.038986 kernel: xor: using function: 32regs (11705 MB/sec)
Apr 30 00:43:35.126113 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:43:35.148281 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:43:35.162418 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:43:35.208483 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Apr 30 00:43:35.218538 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:43:35.241457 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:43:35.276738 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Apr 30 00:43:35.346206 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:43:35.363438 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:43:35.489805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:43:35.502339 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:43:35.551982 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:43:35.558813 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:43:35.562553 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:43:35.565281 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:43:35.580513 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:43:35.641450 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:43:35.753337 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 00:43:35.753423 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 30 00:43:35.771464 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 30 00:43:35.771753 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 30 00:43:35.774427 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:5b:5f:ba:6b:d1
Apr 30 00:43:35.747442 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:43:35.747766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:43:35.753732 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:43:35.756050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:43:35.756764 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:43:35.759958 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:43:35.771626 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:43:35.807845 (udev-worker)[528]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 00:43:35.811612 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 30 00:43:35.811651 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 30 00:43:35.822150 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 30 00:43:35.830744 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:43:35.839537 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:43:35.839604 kernel: GPT:9289727 != 16777215
Apr 30 00:43:35.840847 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:43:35.841677 kernel: GPT:9289727 != 16777215
Apr 30 00:43:35.842752 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:43:35.843651 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:43:35.844465 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:43:35.876851 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:43:35.923107 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (544)
Apr 30 00:43:35.990152 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (531)
Apr 30 00:43:36.036943 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 30 00:43:36.055248 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 30 00:43:36.083889 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 00:43:36.099137 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 30 00:43:36.102036 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 30 00:43:36.122442 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:43:36.135611 disk-uuid[662]: Primary Header is updated.
Apr 30 00:43:36.135611 disk-uuid[662]: Secondary Entries is updated.
Apr 30 00:43:36.135611 disk-uuid[662]: Secondary Header is updated.
Apr 30 00:43:36.144144 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:43:36.152114 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:43:36.162234 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:43:37.167116 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:43:37.168221 disk-uuid[663]: The operation has completed successfully.
Apr 30 00:43:37.359421 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:43:37.361618 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:43:37.430342 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:43:37.439976 sh[1005]: Success
Apr 30 00:43:37.465134 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 00:43:37.564273 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:43:37.579358 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:43:37.591906 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:43:37.622792 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4
Apr 30 00:43:37.622868 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:43:37.624853 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:43:37.626307 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:43:37.627483 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:43:37.736135 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 00:43:37.761742 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:43:37.766479 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:43:37.775415 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:43:37.788315 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:43:37.824538 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:43:37.824636 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:43:37.826627 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 00:43:37.834112 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 00:43:37.852994 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:43:37.858159 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:43:37.869665 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:43:37.879578 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:43:37.994132 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:43:38.007398 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:43:38.070653 systemd-networkd[1197]: lo: Link UP
Apr 30 00:43:38.071197 systemd-networkd[1197]: lo: Gained carrier
Apr 30 00:43:38.075174 systemd-networkd[1197]: Enumeration completed
Apr 30 00:43:38.075344 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:43:38.078356 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:43:38.078364 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:43:38.081489 systemd[1]: Reached target network.target - Network.
Apr 30 00:43:38.094638 systemd-networkd[1197]: eth0: Link UP
Apr 30 00:43:38.094652 systemd-networkd[1197]: eth0: Gained carrier
Apr 30 00:43:38.094671 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:43:38.122205 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.18.219/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 00:43:38.355603 ignition[1116]: Ignition 2.19.0
Apr 30 00:43:38.356166 ignition[1116]: Stage: fetch-offline
Apr 30 00:43:38.356737 ignition[1116]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:38.356762 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:38.357233 ignition[1116]: Ignition finished successfully
Apr 30 00:43:38.366243 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:43:38.381436 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 00:43:38.403766 ignition[1206]: Ignition 2.19.0
Apr 30 00:43:38.403795 ignition[1206]: Stage: fetch
Apr 30 00:43:38.404713 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:38.404739 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:38.404897 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:38.413447 ignition[1206]: PUT result: OK
Apr 30 00:43:38.416689 ignition[1206]: parsed url from cmdline: ""
Apr 30 00:43:38.416731 ignition[1206]: no config URL provided
Apr 30 00:43:38.416751 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:43:38.416776 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:43:38.416812 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:38.418510 ignition[1206]: PUT result: OK
Apr 30 00:43:38.418584 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 30 00:43:38.424496 ignition[1206]: GET result: OK
Apr 30 00:43:38.432598 unknown[1206]: fetched base config from "system"
Apr 30 00:43:38.424639 ignition[1206]: parsing config with SHA512: acb8973430a8ad5673e8ee5c827d119b3f53f724402c374e76435cf25e2807b31cf395fa64a65dd642a336745d4e08568d6679a7ed0ed1c1222eb17d006f0489
Apr 30 00:43:38.432614 unknown[1206]: fetched base config from "system"
Apr 30 00:43:38.433327 ignition[1206]: fetch: fetch complete
Apr 30 00:43:38.432627 unknown[1206]: fetched user config from "aws"
Apr 30 00:43:38.433339 ignition[1206]: fetch: fetch passed
Apr 30 00:43:38.441354 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:43:38.433413 ignition[1206]: Ignition finished successfully
Apr 30 00:43:38.460472 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:43:38.485462 ignition[1212]: Ignition 2.19.0
Apr 30 00:43:38.485492 ignition[1212]: Stage: kargs
Apr 30 00:43:38.486441 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:38.486466 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:38.486615 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:38.494023 ignition[1212]: PUT result: OK
Apr 30 00:43:38.498899 ignition[1212]: kargs: kargs passed
Apr 30 00:43:38.499043 ignition[1212]: Ignition finished successfully
Apr 30 00:43:38.504639 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:43:38.517390 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:43:38.542739 ignition[1218]: Ignition 2.19.0
Apr 30 00:43:38.543273 ignition[1218]: Stage: disks
Apr 30 00:43:38.543962 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:38.543987 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:38.544192 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:38.546587 ignition[1218]: PUT result: OK
Apr 30 00:43:38.557177 ignition[1218]: disks: disks passed
Apr 30 00:43:38.557279 ignition[1218]: Ignition finished successfully
Apr 30 00:43:38.561976 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:43:38.563912 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:43:38.564676 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:43:38.564993 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:43:38.565616 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:43:38.565936 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:43:38.581799 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:43:38.625188 systemd-fsck[1227]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 00:43:38.630000 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:43:38.640327 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:43:38.740136 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none.
Apr 30 00:43:38.741106 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:43:38.744559 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:43:38.756262 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:43:38.765198 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:43:38.770470 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 00:43:38.776878 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:43:38.778013 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:43:38.798101 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1246)
Apr 30 00:43:38.802308 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:43:38.802357 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:43:38.804210 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 00:43:38.808817 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:43:38.817099 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 00:43:38.822395 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:43:38.829577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:43:39.187861 initrd-setup-root[1270]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:43:39.207238 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:43:39.216206 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:43:39.225304 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:43:39.435361 systemd-networkd[1197]: eth0: Gained IPv6LL
Apr 30 00:43:39.576760 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:43:39.584309 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:43:39.591393 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:43:39.617606 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:43:39.622102 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:43:39.660769 ignition[1359]: INFO : Ignition 2.19.0
Apr 30 00:43:39.660769 ignition[1359]: INFO : Stage: mount
Apr 30 00:43:39.665234 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:39.665234 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:39.665234 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:39.662684 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:43:39.684366 ignition[1359]: INFO : PUT result: OK
Apr 30 00:43:39.690663 ignition[1359]: INFO : mount: mount passed
Apr 30 00:43:39.692303 ignition[1359]: INFO : Ignition finished successfully
Apr 30 00:43:39.696319 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:43:39.708342 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:43:39.752127 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:43:39.777100 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1370)
Apr 30 00:43:39.781106 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:43:39.781150 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:43:39.781177 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 00:43:39.788106 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 00:43:39.791193 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:43:39.829273 ignition[1387]: INFO : Ignition 2.19.0
Apr 30 00:43:39.829273 ignition[1387]: INFO : Stage: files
Apr 30 00:43:39.833020 ignition[1387]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:39.833020 ignition[1387]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:39.833020 ignition[1387]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:39.839969 ignition[1387]: INFO : PUT result: OK
Apr 30 00:43:39.843918 ignition[1387]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:43:39.847734 ignition[1387]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:43:39.847734 ignition[1387]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:43:39.893204 ignition[1387]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:43:39.895933 ignition[1387]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:43:39.898359 ignition[1387]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:43:39.897710 unknown[1387]: wrote ssh authorized keys file for user: core
Apr 30 00:43:39.913510 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Apr 30 00:43:39.917475 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Apr 30 00:43:39.987475 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 00:43:40.389709 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Apr 30 00:43:40.393371 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:43:40.393371 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 30 00:43:40.868823 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:43:41.024148 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:43:41.028276 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:43:41.028276 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:43:41.028276 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:43:41.028276 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:43:41.028276 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:43:41.028276 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:43:41.028276 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:43:41.053107 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:43:41.053107 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:43:41.053107 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:43:41.053107 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:43:41.053107 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:43:41.053107 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:43:41.053107 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Apr 30 00:43:41.311459 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 00:43:41.683380 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Apr 30 00:43:41.683380 ignition[1387]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 00:43:41.705095 ignition[1387]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:43:41.708925 ignition[1387]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:43:41.708925 ignition[1387]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 00:43:41.708925 ignition[1387]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:43:41.718353 ignition[1387]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:43:41.718353 ignition[1387]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:43:41.718353 ignition[1387]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:43:41.718353 ignition[1387]: INFO : files: files passed
Apr 30 00:43:41.718353 ignition[1387]: INFO : Ignition finished successfully
Apr 30 00:43:41.733217 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:43:41.749322 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:43:41.754627 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:43:41.766744 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:43:41.768738 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:43:41.805219 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:43:41.808674 initrd-setup-root-after-ignition[1415]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:43:41.808674 initrd-setup-root-after-ignition[1415]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:43:41.815244 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:43:41.824592 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:43:41.834388 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:43:41.897915 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:43:41.898398 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:43:41.905537 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:43:41.907881 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:43:41.910332 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:43:41.919374 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:43:41.963178 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:43:41.977474 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:43:41.999473 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:43:42.003935 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:43:42.008364 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:43:42.010275 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:43:42.010510 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:43:42.013408 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:43:42.016745 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:43:42.018789 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:43:42.019644 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:43:42.020272 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:43:42.020554 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:43:42.020850 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:43:42.021477 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:43:42.021762 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:43:42.022358 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:43:42.022613 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:43:42.022849 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:43:42.023927 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:43:42.024855 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:43:42.025431 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:43:42.040319 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:43:42.040591 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:43:42.040860 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:43:42.045641 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:43:42.045878 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:43:42.048582 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:43:42.048786 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:43:42.073912 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:43:42.093251 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:43:42.100375 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:43:42.120457 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:43:42.122851 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:43:42.126991 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:43:42.133561 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:43:42.133847 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:43:42.143268 ignition[1439]: INFO : Ignition 2.19.0
Apr 30 00:43:42.143268 ignition[1439]: INFO : Stage: umount
Apr 30 00:43:42.143268 ignition[1439]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:42.143268 ignition[1439]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:42.143268 ignition[1439]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:42.143268 ignition[1439]: INFO : PUT result: OK
Apr 30 00:43:42.158143 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:43:42.161943 ignition[1439]: INFO : umount: umount passed
Apr 30 00:43:42.163555 ignition[1439]: INFO : Ignition finished successfully
Apr 30 00:43:42.163991 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:43:42.174717 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:43:42.175039 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:43:42.182338 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:43:42.182470 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:43:42.184550 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:43:42.184656 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:43:42.187971 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 00:43:42.188229 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 00:43:42.199999 systemd[1]: Stopped target network.target - Network.
Apr 30 00:43:42.202021 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:43:42.202183 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:43:42.205104 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:43:42.218881 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:43:42.223545 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:43:42.232522 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:43:42.247915 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:43:42.256229 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:43:42.256347 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:43:42.262107 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:43:42.262205 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:43:42.268994 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:43:42.269144 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:43:42.271180 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:43:42.271293 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:43:42.273781 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:43:42.276663 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:43:42.286458 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:43:42.288758 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:43:42.290412 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:43:42.297623 systemd-networkd[1197]: eth0: DHCPv6 lease lost
Apr 30 00:43:42.300339 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:43:42.300524 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:43:42.308841 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:43:42.310197 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:43:42.318028 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:43:42.318582 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:43:42.326694 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:43:42.326859 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:43:42.346255 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:43:42.349465 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:43:42.349599 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:43:42.352627 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:43:42.352743 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:43:42.355428 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:43:42.355547 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:43:42.358402 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:43:42.358509 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:43:42.362215 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:43:42.399912 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:43:42.400250 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:43:42.416319 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:43:42.417342 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:43:42.423654 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:43:42.423760 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:43:42.425935 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:43:42.426047 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:43:42.429186 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:43:42.429298 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:43:42.445723 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:43:42.445843 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:43:42.466360 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:43:42.470329 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:43:42.470464 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:43:42.473367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:43:42.473479 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:43:42.477032 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:43:42.477278 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:43:42.512422 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:43:42.512838 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:43:42.521789 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:43:42.535971 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:43:42.554330 systemd[1]: Switching root.
Apr 30 00:43:42.594192 systemd-journald[251]: Journal stopped
Apr 30 00:43:45.395774 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:43:45.395935 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:43:45.395987 kernel: SELinux: policy capability open_perms=1
Apr 30 00:43:45.396023 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:43:45.396054 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:43:45.396135 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:43:45.396181 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:43:45.396216 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:43:45.396246 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:43:45.396275 kernel: audit: type=1403 audit(1745973823.261:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:43:45.396309 systemd[1]: Successfully loaded SELinux policy in 105.520ms.
Apr 30 00:43:45.396360 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.839ms.
Apr 30 00:43:45.396397 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:43:45.396433 systemd[1]: Detected virtualization amazon.
Apr 30 00:43:45.396467 systemd[1]: Detected architecture arm64.
Apr 30 00:43:45.396503 systemd[1]: Detected first boot.
Apr 30 00:43:45.396538 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:43:45.396572 zram_generator::config[1483]: No configuration found.
Apr 30 00:43:45.396610 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:43:45.396642 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 00:43:45.396673 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 00:43:45.396706 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:43:45.396739 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:43:45.396777 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:43:45.396811 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:43:45.396843 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:43:45.396876 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:43:45.396909 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:43:45.396939 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:43:45.396969 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:43:45.397014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:43:45.397054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:43:45.397139 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:43:45.397176 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:43:45.397209 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:43:45.397245 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:43:45.397279 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 00:43:45.397316 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:43:45.397350 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 00:43:45.397382 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 00:43:45.397424 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:43:45.397457 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:43:45.397488 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:43:45.397523 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:43:45.397557 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:43:45.397591 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:43:45.397624 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:43:45.397655 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:43:45.397693 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:43:45.397725 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:43:45.397759 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:43:45.397791 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:43:45.397834 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:43:45.397870 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:43:45.397901 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:43:45.397934 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:43:45.397966 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:43:45.398010 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:43:45.398045 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:43:45.398145 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:43:45.398181 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:43:45.398213 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:43:45.398248 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:43:45.398279 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:43:45.398310 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:43:45.398349 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:43:45.398381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:43:45.398414 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:43:45.398447 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:43:45.398480 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:43:45.398511 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 00:43:45.398541 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 00:43:45.398574 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 00:43:45.398608 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 00:43:45.398646 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:43:45.398677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:43:45.398707 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:43:45.398739 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:43:45.398769 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:43:45.398801 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 00:43:45.398832 systemd[1]: Stopped verity-setup.service.
Apr 30 00:43:45.398864 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:43:45.398909 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:43:45.398946 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:43:45.398979 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:43:45.399010 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:43:45.399042 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:43:45.399130 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:43:45.399168 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:43:45.399200 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:43:45.399235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:43:45.399266 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:43:45.399295 kernel: loop: module loaded
Apr 30 00:43:45.399326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:43:45.399357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:43:45.399387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:43:45.399416 kernel: fuse: init (API version 7.39)
Apr 30 00:43:45.399453 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:43:45.399484 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:43:45.399516 kernel: ACPI: bus type drm_connector registered
Apr 30 00:43:45.399546 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:43:45.399576 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:43:45.399607 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:43:45.399643 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:43:45.399754 systemd-journald[1561]: Collecting audit messages is disabled.
Apr 30 00:43:45.399827 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:43:45.399865 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:43:45.399897 systemd-journald[1561]: Journal started
Apr 30 00:43:45.399954 systemd-journald[1561]: Runtime Journal (/run/log/journal/ec297a5594ca0ed8114d1eca1ee5c83c) is 8.0M, max 75.3M, 67.3M free.
Apr 30 00:43:44.719221 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:43:45.405236 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:43:44.792118 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 30 00:43:44.792946 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 00:43:45.448795 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:43:45.462010 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:43:45.472289 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:43:45.484289 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:43:45.488349 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:43:45.488413 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:43:45.493816 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:43:45.507011 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:43:45.518438 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:43:45.520876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:43:45.533432 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:43:45.541337 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:43:45.546319 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:43:45.553437 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:43:45.556114 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:43:45.568381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:43:45.584414 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:43:45.606444 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:43:45.614054 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:43:45.617099 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:43:45.620805 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:43:45.654922 systemd-journald[1561]: Time spent on flushing to /var/log/journal/ec297a5594ca0ed8114d1eca1ee5c83c is 88.632ms for 910 entries.
Apr 30 00:43:45.654922 systemd-journald[1561]: System Journal (/var/log/journal/ec297a5594ca0ed8114d1eca1ee5c83c) is 8.0M, max 195.6M, 187.6M free.
Apr 30 00:43:45.765304 systemd-journald[1561]: Received client request to flush runtime journal.
Apr 30 00:43:45.765392 kernel: loop0: detected capacity change from 0 to 114432
Apr 30 00:43:45.694795 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:43:45.699675 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:43:45.708828 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:43:45.780386 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:43:45.784776 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:43:45.797411 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:43:45.813313 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:43:45.816967 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:43:45.831370 kernel: loop1: detected capacity change from 0 to 52536
Apr 30 00:43:45.847621 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:43:45.859541 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:43:45.876972 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:43:45.905377 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:43:45.928121 kernel: loop2: detected capacity change from 0 to 201592
Apr 30 00:43:45.939998 udevadm[1632]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 00:43:45.989307 systemd-tmpfiles[1629]: ACLs are not supported, ignoring.
Apr 30 00:43:45.989343 systemd-tmpfiles[1629]: ACLs are not supported, ignoring.
Apr 30 00:43:46.001574 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:43:46.057533 kernel: loop3: detected capacity change from 0 to 114328
Apr 30 00:43:46.182134 kernel: loop4: detected capacity change from 0 to 114432
Apr 30 00:43:46.210445 kernel: loop5: detected capacity change from 0 to 52536
Apr 30 00:43:46.237141 kernel: loop6: detected capacity change from 0 to 201592
Apr 30 00:43:46.272133 kernel: loop7: detected capacity change from 0 to 114328
Apr 30 00:43:46.282027 (sd-merge)[1637]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 30 00:43:46.283959 (sd-merge)[1637]: Merged extensions into '/usr'.
Apr 30 00:43:46.299249 systemd[1]: Reloading requested from client PID 1612 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:43:46.299477 systemd[1]: Reloading...
Apr 30 00:43:46.454406 zram_generator::config[1660]: No configuration found.
Apr 30 00:43:46.814327 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:43:46.955619 systemd[1]: Reloading finished in 654 ms.
Apr 30 00:43:47.027446 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:43:47.039642 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:43:47.056365 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:43:47.068730 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:43:47.082492 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:43:47.112596 systemd[1]: Reloading requested from client PID 1714 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:43:47.112624 systemd[1]: Reloading...
Apr 30 00:43:47.148539 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:43:47.152747 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:43:47.158513 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:43:47.160107 systemd-tmpfiles[1715]: ACLs are not supported, ignoring.
Apr 30 00:43:47.160306 systemd-tmpfiles[1715]: ACLs are not supported, ignoring.
Apr 30 00:43:47.170262 systemd-udevd[1717]: Using default interface naming scheme 'v255'.
Apr 30 00:43:47.187511 systemd-tmpfiles[1715]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:43:47.187531 systemd-tmpfiles[1715]: Skipping /boot
Apr 30 00:43:47.249050 systemd-tmpfiles[1715]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:43:47.249099 systemd-tmpfiles[1715]: Skipping /boot
Apr 30 00:43:47.425831 zram_generator::config[1770]: No configuration found.
Apr 30 00:43:47.481945 (udev-worker)[1738]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 00:43:47.512022 ldconfig[1607]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:43:47.820956 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:43:47.924133 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1741)
Apr 30 00:43:48.020947 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 00:43:48.022725 systemd[1]: Reloading finished in 909 ms.
Apr 30 00:43:48.057294 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:43:48.062162 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:43:48.071366 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:43:48.154189 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:43:48.172917 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 00:43:48.194736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 00:43:48.205410 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:43:48.217680 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:43:48.222191 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:43:48.227467 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 00:43:48.232464 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:43:48.238611 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:43:48.245540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:43:48.258539 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:43:48.260972 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:43:48.265519 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 00:43:48.273542 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:43:48.296709 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:43:48.310660 lvm[1917]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:43:48.313598 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:43:48.315883 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:43:48.323568 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:43:48.329698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:43:48.372740 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:43:48.375218 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:43:48.426334 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:43:48.429559 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:43:48.431192 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:43:48.445510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:43:48.446493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:43:48.449836 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:43:48.450891 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:43:48.451304 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:43:48.455270 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:43:48.507196 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 00:43:48.511721 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:43:48.518713 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:43:48.533620 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:43:48.537397 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:43:48.542204 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:43:48.547699 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:43:48.567805 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:43:48.568322 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:43:48.585879 lvm[1955]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:43:48.587499 augenrules[1957]: No rules
Apr 30 00:43:48.598156 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:43:48.627242 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:43:48.644643 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:43:48.677992 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:43:48.709881 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:43:48.805699 systemd-networkd[1930]: lo: Link UP
Apr 30 00:43:48.805728 systemd-networkd[1930]: lo: Gained carrier
Apr 30 00:43:48.809329 systemd-networkd[1930]: Enumeration completed
Apr 30 00:43:48.809566 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:43:48.815371 systemd-networkd[1930]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:43:48.815393 systemd-networkd[1930]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:43:48.819976 systemd-networkd[1930]: eth0: Link UP
Apr 30 00:43:48.820614 systemd-networkd[1930]: eth0: Gained carrier
Apr 30 00:43:48.820682 systemd-networkd[1930]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:43:48.822405 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:43:48.827090 systemd-resolved[1931]: Positive Trust Anchors:
Apr 30 00:43:48.827889 systemd-resolved[1931]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:43:48.827976 systemd-resolved[1931]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:43:48.830254 systemd-networkd[1930]: eth0: DHCPv4 address 172.31.18.219/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 00:43:48.847517 systemd-resolved[1931]: Defaulting to hostname 'linux'.
Apr 30 00:43:48.852028 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:43:48.854580 systemd[1]: Reached target network.target - Network.
Apr 30 00:43:48.858284 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:43:48.860849 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:43:48.863318 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:43:48.865939 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:43:48.869278 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:43:48.871818 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:43:48.875035 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:43:48.877653 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:43:48.877712 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:43:48.879992 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:43:48.886321 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:43:48.891482 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:43:48.901702 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:43:48.905317 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:43:48.907993 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:43:48.910319 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:43:48.913167 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:43:48.913244 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:43:48.920367 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:43:48.932509 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 00:43:48.940555 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:43:48.946515 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:43:48.951757 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:43:48.954146 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:43:48.959445 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:43:48.975683 systemd[1]: Started ntpd.service - Network Time Service.
Apr 30 00:43:48.981861 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:43:49.004461 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 30 00:43:49.013153 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:43:49.021414 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:43:49.032335 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:43:49.036784 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:43:49.037789 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:43:49.058435 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:43:49.066424 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:43:49.107320 jq[1981]: false
Apr 30 00:43:49.129112 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:43:49.129540 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:43:49.183933 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:43:49.187301 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:43:49.190199 ntpd[1984]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:34 UTC 2025 (1): Starting
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:34 UTC 2025 (1): Starting
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: ----------------------------------------------------
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: ntp-4 is maintained by Network Time Foundation,
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: corporation. Support and training for ntp-4 are
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: available at https://www.nwtime.org/support
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: ----------------------------------------------------
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: proto: precision = 0.096 usec (-23)
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: basedate set to 2025-04-17
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: gps base set to 2025-04-20 (week 2363)
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: Listen normally on 3 eth0 172.31.18.219:123
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: Listen normally on 4 lo [::1]:123
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: bind(21) AF_INET6 fe80::45b:5fff:feba:6bd1%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: unable to create socket on eth0 (5) for fe80::45b:5fff:feba:6bd1%2#123
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: failed to init interface for address fe80::45b:5fff:feba:6bd1%2
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: Listening on routing socket on fd #21 for interface updates
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 00:43:49.211888 ntpd[1984]: 30 Apr 00:43:49 ntpd[1984]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 00:43:49.225352 update_engine[1992]: I20250430 00:43:49.210654 1992 main.cc:92] Flatcar Update Engine starting
Apr 30 00:43:49.196397 (ntainerd)[2002]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:43:49.190262 ntpd[1984]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 00:43:49.190284 ntpd[1984]: ----------------------------------------------------
Apr 30 00:43:49.245913 jq[1993]: true
Apr 30 00:43:49.190303 ntpd[1984]: ntp-4 is maintained by Network Time Foundation,
Apr 30 00:43:49.190323 ntpd[1984]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 00:43:49.190342 ntpd[1984]: corporation. Support and training for ntp-4 are
Apr 30 00:43:49.190362 ntpd[1984]: available at https://www.nwtime.org/support
Apr 30 00:43:49.190387 ntpd[1984]: ----------------------------------------------------
Apr 30 00:43:49.194471 ntpd[1984]: proto: precision = 0.096 usec (-23)
Apr 30 00:43:49.195141 ntpd[1984]: basedate set to 2025-04-17
Apr 30 00:43:49.195182 ntpd[1984]: gps base set to 2025-04-20 (week 2363)
Apr 30 00:43:49.199838 ntpd[1984]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 00:43:49.201774 ntpd[1984]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 00:43:49.202360 ntpd[1984]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 00:43:49.202448 ntpd[1984]: Listen normally on 3 eth0 172.31.18.219:123
Apr 30 00:43:49.202531 ntpd[1984]: Listen normally on 4 lo [::1]:123
Apr 30 00:43:49.202632 ntpd[1984]: bind(21) AF_INET6 fe80::45b:5fff:feba:6bd1%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 00:43:49.202676 ntpd[1984]: unable to create socket on eth0 (5) for fe80::45b:5fff:feba:6bd1%2#123
Apr 30 00:43:49.202705 ntpd[1984]: failed to init interface for address fe80::45b:5fff:feba:6bd1%2
Apr 30 00:43:49.202771 ntpd[1984]: Listening on routing socket on fd #21 for interface updates
Apr 30 00:43:49.210724 ntpd[1984]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 00:43:49.210787 ntpd[1984]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 00:43:49.275182 tar[1995]: linux-arm64/LICENSE
Apr 30 00:43:49.275182 tar[1995]: linux-arm64/helm
Apr 30 00:43:49.264356 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:43:49.263952 dbus-daemon[1980]: [system] SELinux support is enabled
Apr 30 00:43:49.276612 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:43:49.286207 extend-filesystems[1982]: Found loop4
Apr 30 00:43:49.286207 extend-filesystems[1982]: Found loop5
Apr 30 00:43:49.286207 extend-filesystems[1982]: Found loop6
Apr 30 00:43:49.286207 extend-filesystems[1982]: Found loop7
Apr 30 00:43:49.286207 extend-filesystems[1982]: Found nvme0n1
Apr 30 00:43:49.286207 extend-filesystems[1982]: Found nvme0n1p1
Apr 30 00:43:49.276674 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:43:49.280327 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:43:49.280372 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:43:49.317401 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:43:49.325385 extend-filesystems[1982]: Found nvme0n1p2
Apr 30 00:43:49.325385 extend-filesystems[1982]: Found nvme0n1p3
Apr 30 00:43:49.325385 extend-filesystems[1982]: Found usr
Apr 30 00:43:49.325385 extend-filesystems[1982]: Found nvme0n1p4
Apr 30 00:43:49.325385 extend-filesystems[1982]: Found nvme0n1p6
Apr 30 00:43:49.325385 extend-filesystems[1982]: Found nvme0n1p7
Apr 30 00:43:49.325385 extend-filesystems[1982]: Found nvme0n1p9
Apr 30 00:43:49.325385 extend-filesystems[1982]: Checking size of /dev/nvme0n1p9
Apr 30 00:43:49.317924 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:43:49.328605 dbus-daemon[1980]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1930 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 30 00:43:49.387882 update_engine[1992]: I20250430 00:43:49.355479 1992 update_check_scheduler.cc:74] Next update check in 7m34s
Apr 30 00:43:49.387943 jq[2014]: true
Apr 30 00:43:49.361568 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 30 00:43:49.381751 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:43:49.393151 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 30 00:43:49.407503 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:43:49.431041 coreos-metadata[1979]: Apr 30 00:43:49.430 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 30 00:43:49.441225 coreos-metadata[1979]: Apr 30 00:43:49.440 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 30 00:43:49.454261 coreos-metadata[1979]: Apr 30 00:43:49.448 INFO Fetch successful
Apr 30 00:43:49.454261 coreos-metadata[1979]: Apr 30 00:43:49.448 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 30 00:43:49.455001 coreos-metadata[1979]: Apr 30 00:43:49.454 INFO Fetch successful
Apr 30 00:43:49.455001 coreos-metadata[1979]: Apr 30 00:43:49.454 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 30 00:43:49.455996 coreos-metadata[1979]: Apr 30 00:43:49.455 INFO Fetch successful
Apr 30 00:43:49.455996 coreos-metadata[1979]: Apr 30 00:43:49.455 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 30 00:43:49.457756 coreos-metadata[1979]: Apr 30 00:43:49.457 INFO Fetch successful
Apr 30 00:43:49.457756 coreos-metadata[1979]: Apr 30 00:43:49.457 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 30 00:43:49.465151 extend-filesystems[1982]: Resized partition /dev/nvme0n1p9
Apr 30 00:43:49.467327 coreos-metadata[1979]: Apr 30 00:43:49.465 INFO Fetch failed with 404: resource not found
Apr 30 00:43:49.467327 coreos-metadata[1979]: Apr 30 00:43:49.465 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 30 00:43:49.473939 coreos-metadata[1979]: Apr 30 00:43:49.473 INFO Fetch successful
Apr 30 00:43:49.473939 coreos-metadata[1979]: Apr 30 00:43:49.473 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 30 00:43:49.481680 coreos-metadata[1979]: Apr 30 00:43:49.481 INFO Fetch successful
Apr 30 00:43:49.481680 coreos-metadata[1979]: Apr 30 00:43:49.481 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 30 00:43:49.481832 extend-filesystems[2035]: resize2fs 1.47.1 (20-May-2024)
Apr 30 00:43:49.494163 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Apr 30 00:43:49.494439 coreos-metadata[1979]: Apr 30 00:43:49.488 INFO Fetch successful
Apr 30 00:43:49.494439 coreos-metadata[1979]: Apr 30 00:43:49.488 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 30 00:43:49.496275 coreos-metadata[1979]: Apr 30 00:43:49.494 INFO Fetch successful
Apr 30 00:43:49.496275 coreos-metadata[1979]: Apr 30 00:43:49.494 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 30 00:43:49.506108 coreos-metadata[1979]: Apr 30 00:43:49.500 INFO Fetch successful
Apr 30 00:43:49.665260 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Apr 30 00:43:49.669181 locksmithd[2030]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:43:49.691116 extend-filesystems[2035]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 30 00:43:49.691116 extend-filesystems[2035]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 30 00:43:49.691116 extend-filesystems[2035]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Apr 30 00:43:49.708356 extend-filesystems[1982]: Resized filesystem in /dev/nvme0n1p9
Apr 30 00:43:49.698650 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:43:49.699096 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:43:49.711365 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 00:43:49.717829 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:43:49.738356 systemd-logind[1991]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 30 00:43:49.744621 bash[2068]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:43:49.738406 systemd-logind[1991]: Watching system buttons on /dev/input/event1 (Sleep Button)
Apr 30 00:43:49.739528 systemd-logind[1991]: New seat seat0.
Apr 30 00:43:49.745458 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:43:49.749526 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:43:49.778177 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1754)
Apr 30 00:43:49.849873 systemd[1]: Starting sshkeys.service...
Apr 30 00:43:49.879214 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 00:43:49.889207 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 00:43:49.904028 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 00:43:50.049233 dbus-daemon[1980]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 30 00:43:50.050416 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 30 00:43:50.060311 dbus-daemon[1980]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2026 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 30 00:43:50.119169 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 30 00:43:50.174214 polkitd[2111]: Started polkitd version 121
Apr 30 00:43:50.193941 ntpd[1984]: bind(24) AF_INET6 fe80::45b:5fff:feba:6bd1%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 00:43:50.194759 ntpd[1984]: 30 Apr 00:43:50 ntpd[1984]: bind(24) AF_INET6 fe80::45b:5fff:feba:6bd1%2#123 flags 0x11 failed: Cannot assign requested address
Apr 30 00:43:50.194759 ntpd[1984]: 30 Apr 00:43:50 ntpd[1984]: unable to create socket on eth0 (6) for fe80::45b:5fff:feba:6bd1%2#123
Apr 30 00:43:50.194759 ntpd[1984]: 30 Apr 00:43:50 ntpd[1984]: failed to init interface for address fe80::45b:5fff:feba:6bd1%2
Apr 30 00:43:50.194011 ntpd[1984]: unable to create socket on eth0 (6) for fe80::45b:5fff:feba:6bd1%2#123
Apr 30 00:43:50.194040 ntpd[1984]: failed to init interface for address fe80::45b:5fff:feba:6bd1%2
Apr 30 00:43:50.210871 polkitd[2111]: Loading rules from directory /etc/polkit-1/rules.d
Apr 30 00:43:50.211032 polkitd[2111]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 30 00:43:50.224809 polkitd[2111]: Finished loading, compiling and executing 2 rules
Apr 30 00:43:50.237795 dbus-daemon[1980]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 30 00:43:50.240966 systemd[1]: Started polkit.service - Authorization Manager.
Apr 30 00:43:50.246199 polkitd[2111]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 30 00:43:50.251784 coreos-metadata[2091]: Apr 30 00:43:50.251 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 30 00:43:50.254117 coreos-metadata[2091]: Apr 30 00:43:50.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 30 00:43:50.260385 coreos-metadata[2091]: Apr 30 00:43:50.258 INFO Fetch successful
Apr 30 00:43:50.260385 coreos-metadata[2091]: Apr 30 00:43:50.258 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 30 00:43:50.262929 coreos-metadata[2091]: Apr 30 00:43:50.262 INFO Fetch successful
Apr 30 00:43:50.268203 unknown[2091]: wrote ssh authorized keys file for user: core
Apr 30 00:43:50.314163 containerd[2002]: time="2025-04-30T00:43:50.312213107Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 00:43:50.368588 update-ssh-keys[2161]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:43:50.386227 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 00:43:50.398335 systemd-hostnamed[2026]: Hostname set to (transient)
Apr 30 00:43:50.398534 systemd-resolved[1931]: System hostname changed to 'ip-172-31-18-219'.
Apr 30 00:43:50.410779 systemd[1]: Finished sshkeys.service.
Apr 30 00:43:50.532575 sshd_keygen[2023]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 00:43:50.552109 containerd[2002]: time="2025-04-30T00:43:50.550541592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.555475 containerd[2002]: time="2025-04-30T00:43:50.555396132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:43:50.555730 containerd[2002]: time="2025-04-30T00:43:50.555677604Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:43:50.555929 containerd[2002]: time="2025-04-30T00:43:50.555859104Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:43:50.556444 containerd[2002]: time="2025-04-30T00:43:50.556383024Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:43:50.559218 containerd[2002]: time="2025-04-30T00:43:50.558224484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.559218 containerd[2002]: time="2025-04-30T00:43:50.558457224Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:43:50.559218 containerd[2002]: time="2025-04-30T00:43:50.558493224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.559218 containerd[2002]: time="2025-04-30T00:43:50.558843348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:43:50.559218 containerd[2002]: time="2025-04-30T00:43:50.558884928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.559218 containerd[2002]: time="2025-04-30T00:43:50.558932496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:43:50.559218 containerd[2002]: time="2025-04-30T00:43:50.558960684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.560517 containerd[2002]: time="2025-04-30T00:43:50.559853340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.560517 containerd[2002]: time="2025-04-30T00:43:50.560430792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.560955 containerd[2002]: time="2025-04-30T00:43:50.560906688Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:43:50.561166 containerd[2002]: time="2025-04-30T00:43:50.561128292Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:43:50.561518 containerd[2002]: time="2025-04-30T00:43:50.561474792Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:43:50.561816 containerd[2002]: time="2025-04-30T00:43:50.561772236Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.570658188Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.570892356Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.570955452Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.570992004Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.571026060Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.571420728Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.571980396Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.573486996Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.573563940Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.573603432Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.573663168Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.573701448Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.573762828Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.574197 containerd[2002]: time="2025-04-30T00:43:50.573831516Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.574871 containerd[2002]: time="2025-04-30T00:43:50.573875664Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.574871 containerd[2002]: time="2025-04-30T00:43:50.573936684Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.574871 containerd[2002]: time="2025-04-30T00:43:50.573972648Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.574871 containerd[2002]: time="2025-04-30T00:43:50.574032828Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.574871 containerd[2002]: time="2025-04-30T00:43:50.574140720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.574871 containerd[2002]: time="2025-04-30T00:43:50.574223844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.574871 containerd[2002]: time="2025-04-30T00:43:50.574284000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.574871 containerd[2002]: time="2025-04-30T00:43:50.574319496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.574871 containerd[2002]: time="2025-04-30T00:43:50.574376160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.576964 containerd[2002]: time="2025-04-30T00:43:50.574410852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.576964 containerd[2002]: time="2025-04-30T00:43:50.576229020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.576964 containerd[2002]: time="2025-04-30T00:43:50.576317148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.576964 containerd[2002]: time="2025-04-30T00:43:50.576356784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.578123 containerd[2002]: time="2025-04-30T00:43:50.577278888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.578123 containerd[2002]: time="2025-04-30T00:43:50.577701516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.578123 containerd[2002]: time="2025-04-30T00:43:50.577740900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.578123 containerd[2002]: time="2025-04-30T00:43:50.577798608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.578123 containerd[2002]: time="2025-04-30T00:43:50.577840944Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:43:50.578123 containerd[2002]: time="2025-04-30T00:43:50.577920588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.578123 containerd[2002]: time="2025-04-30T00:43:50.577998156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.578123 containerd[2002]: time="2025-04-30T00:43:50.578123088Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:43:50.579504 containerd[2002]: time="2025-04-30T00:43:50.579266964Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:43:50.579657 containerd[2002]: time="2025-04-30T00:43:50.579532812Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:43:50.579657 containerd[2002]: time="2025-04-30T00:43:50.579564060Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 00:43:50.581093 containerd[2002]: time="2025-04-30T00:43:50.579890340Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 00:43:50.581093 containerd[2002]: time="2025-04-30T00:43:50.579971604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.581093 containerd[2002]: time="2025-04-30T00:43:50.580032780Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 00:43:50.581093 containerd[2002]: time="2025-04-30T00:43:50.580106844Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 00:43:50.581093 containerd[2002]: time="2025-04-30T00:43:50.580160100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.583133 containerd[2002]: time="2025-04-30T00:43:50.582911172Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:43:50.585258 containerd[2002]: time="2025-04-30T00:43:50.585160320Z" level=info msg="Connect containerd service" Apr 30 00:43:50.585416 containerd[2002]: time="2025-04-30T00:43:50.585344832Z" level=info msg="using legacy CRI server" Apr 30 00:43:50.585514 containerd[2002]: time="2025-04-30T00:43:50.585407784Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:43:50.586466 containerd[2002]: time="2025-04-30T00:43:50.586112424Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:43:50.590703 containerd[2002]: time="2025-04-30T00:43:50.590622588Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:43:50.592668 containerd[2002]: time="2025-04-30T00:43:50.591859956Z" level=info msg="Start subscribing containerd event" Apr 30 00:43:50.592668 containerd[2002]: time="2025-04-30T00:43:50.591990060Z" level=info msg="Start recovering state" Apr 30 00:43:50.593881 containerd[2002]: time="2025-04-30T00:43:50.593799516Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Apr 30 00:43:50.594038 containerd[2002]: time="2025-04-30T00:43:50.593954364Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:43:50.597329 containerd[2002]: time="2025-04-30T00:43:50.597260220Z" level=info msg="Start event monitor" Apr 30 00:43:50.597329 containerd[2002]: time="2025-04-30T00:43:50.597321732Z" level=info msg="Start snapshots syncer" Apr 30 00:43:50.597535 containerd[2002]: time="2025-04-30T00:43:50.597348468Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:43:50.597535 containerd[2002]: time="2025-04-30T00:43:50.597368532Z" level=info msg="Start streaming server" Apr 30 00:43:50.598696 containerd[2002]: time="2025-04-30T00:43:50.597604020Z" level=info msg="containerd successfully booted in 0.295373s" Apr 30 00:43:50.597731 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:43:50.636753 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:43:50.647844 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:43:50.668971 systemd[1]: Started sshd@0-172.31.18.219:22-147.75.109.163:54272.service - OpenSSH per-connection server daemon (147.75.109.163:54272). Apr 30 00:43:50.687780 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:43:50.690237 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:43:50.700860 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:43:50.741307 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:43:50.756277 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:43:50.763269 systemd-networkd[1930]: eth0: Gained IPv6LL Apr 30 00:43:50.767291 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 00:43:50.772620 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 30 00:43:50.775869 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:43:50.781604 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:43:50.796477 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 30 00:43:50.808462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:43:50.819126 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:43:50.914184 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 00:43:50.943176 amazon-ssm-agent[2203]: Initializing new seelog logger Apr 30 00:43:50.943176 amazon-ssm-agent[2203]: New Seelog Logger Creation Complete Apr 30 00:43:50.943176 amazon-ssm-agent[2203]: 2025/04/30 00:43:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:50.943176 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:50.948049 amazon-ssm-agent[2203]: 2025/04/30 00:43:50 processing appconfig overrides Apr 30 00:43:50.950796 amazon-ssm-agent[2203]: 2025/04/30 00:43:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:50.950796 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:50.950966 amazon-ssm-agent[2203]: 2025/04/30 00:43:50 processing appconfig overrides Apr 30 00:43:50.952326 amazon-ssm-agent[2203]: 2025-04-30 00:43:50 INFO Proxy environment variables: Apr 30 00:43:50.953805 amazon-ssm-agent[2203]: 2025/04/30 00:43:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:50.953975 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:50.954977 amazon-ssm-agent[2203]: 2025/04/30 00:43:50 processing appconfig overrides Apr 30 00:43:50.967616 amazon-ssm-agent[2203]: 2025/04/30 00:43:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 30 00:43:50.967794 amazon-ssm-agent[2203]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:50.969585 amazon-ssm-agent[2203]: 2025/04/30 00:43:50 processing appconfig overrides Apr 30 00:43:51.030228 sshd[2193]: Accepted publickey for core from 147.75.109.163 port 54272 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:43:51.034314 sshd[2193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:51.053109 amazon-ssm-agent[2203]: 2025-04-30 00:43:50 INFO https_proxy: Apr 30 00:43:51.074246 systemd-logind[1991]: New session 1 of user core. Apr 30 00:43:51.079238 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:43:51.093665 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:43:51.150187 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:43:51.153360 amazon-ssm-agent[2203]: 2025-04-30 00:43:50 INFO http_proxy: Apr 30 00:43:51.166805 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:43:51.196592 (systemd)[2221]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:43:51.255354 amazon-ssm-agent[2203]: 2025-04-30 00:43:50 INFO no_proxy: Apr 30 00:43:51.260224 tar[1995]: linux-arm64/README.md Apr 30 00:43:51.311827 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 00:43:51.355210 amazon-ssm-agent[2203]: 2025-04-30 00:43:50 INFO Checking if agent identity type OnPrem can be assumed Apr 30 00:43:51.455182 amazon-ssm-agent[2203]: 2025-04-30 00:43:50 INFO Checking if agent identity type EC2 can be assumed Apr 30 00:43:51.491056 systemd[2221]: Queued start job for default target default.target. Apr 30 00:43:51.500812 systemd[2221]: Created slice app.slice - User Application Slice. Apr 30 00:43:51.500886 systemd[2221]: Reached target paths.target - Paths. 
Apr 30 00:43:51.500920 systemd[2221]: Reached target timers.target - Timers. Apr 30 00:43:51.505375 systemd[2221]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:43:51.538874 systemd[2221]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:43:51.539227 systemd[2221]: Reached target sockets.target - Sockets. Apr 30 00:43:51.539265 systemd[2221]: Reached target basic.target - Basic System. Apr 30 00:43:51.539371 systemd[2221]: Reached target default.target - Main User Target. Apr 30 00:43:51.539445 systemd[2221]: Startup finished in 322ms. Apr 30 00:43:51.540773 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:43:51.553202 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO Agent will take identity from EC2 Apr 30 00:43:51.555103 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:43:51.652179 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 00:43:51.753145 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 00:43:51.787939 systemd[1]: Started sshd@1-172.31.18.219:22-147.75.109.163:54274.service - OpenSSH per-connection server daemon (147.75.109.163:54274). Apr 30 00:43:51.851183 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 00:43:51.951027 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 30 00:43:51.963224 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 30 00:43:51.963224 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [amazon-ssm-agent] Starting Core Agent Apr 30 00:43:51.963224 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Apr 30 00:43:51.963224 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [Registrar] Starting registrar module Apr 30 00:43:51.963854 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 30 00:43:51.963854 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [EC2Identity] EC2 registration was successful. Apr 30 00:43:51.963854 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [CredentialRefresher] credentialRefresher has started Apr 30 00:43:51.963854 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [CredentialRefresher] Starting credentials refresher loop Apr 30 00:43:51.963854 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 30 00:43:52.050884 amazon-ssm-agent[2203]: 2025-04-30 00:43:51 INFO [CredentialRefresher] Next credential rotation will be in 31.683316514333335 minutes Apr 30 00:43:52.081486 sshd[2238]: Accepted publickey for core from 147.75.109.163 port 54274 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:43:52.084309 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:52.096030 systemd-logind[1991]: New session 2 of user core. Apr 30 00:43:52.103422 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 00:43:52.278755 sshd[2238]: pam_unix(sshd:session): session closed for user core Apr 30 00:43:52.286836 systemd[1]: sshd@1-172.31.18.219:22-147.75.109.163:54274.service: Deactivated successfully. Apr 30 00:43:52.292481 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:43:52.294301 systemd-logind[1991]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:43:52.296795 systemd-logind[1991]: Removed session 2. Apr 30 00:43:52.339666 systemd[1]: Started sshd@2-172.31.18.219:22-147.75.109.163:54280.service - OpenSSH per-connection server daemon (147.75.109.163:54280). 
Apr 30 00:43:52.617204 sshd[2245]: Accepted publickey for core from 147.75.109.163 port 54280 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:43:52.620083 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:52.630469 systemd-logind[1991]: New session 3 of user core. Apr 30 00:43:52.643409 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:43:52.823307 sshd[2245]: pam_unix(sshd:session): session closed for user core Apr 30 00:43:52.830562 systemd[1]: sshd@2-172.31.18.219:22-147.75.109.163:54280.service: Deactivated successfully. Apr 30 00:43:52.834326 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:43:52.836486 systemd-logind[1991]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:43:52.839193 systemd-logind[1991]: Removed session 3. Apr 30 00:43:52.992722 amazon-ssm-agent[2203]: 2025-04-30 00:43:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 30 00:43:53.093485 amazon-ssm-agent[2203]: 2025-04-30 00:43:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2252) started Apr 30 00:43:53.190973 ntpd[1984]: Listen normally on 7 eth0 [fe80::45b:5fff:feba:6bd1%2]:123 Apr 30 00:43:53.191574 ntpd[1984]: 30 Apr 00:43:53 ntpd[1984]: Listen normally on 7 eth0 [fe80::45b:5fff:feba:6bd1%2]:123 Apr 30 00:43:53.194957 amazon-ssm-agent[2203]: 2025-04-30 00:43:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 30 00:43:53.286319 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:43:53.289633 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:43:53.292296 systemd[1]: Startup finished in 1.198s (kernel) + 9.402s (initrd) + 10.134s (userspace) = 20.736s. 
Apr 30 00:43:53.304797 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:43:54.433712 kubelet[2266]: E0430 00:43:54.433615 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:43:54.438161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:43:54.438506 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:43:54.439201 systemd[1]: kubelet.service: Consumed 1.330s CPU time. Apr 30 00:44:02.885630 systemd[1]: Started sshd@3-172.31.18.219:22-147.75.109.163:51518.service - OpenSSH per-connection server daemon (147.75.109.163:51518). Apr 30 00:44:03.143250 sshd[2279]: Accepted publickey for core from 147.75.109.163 port 51518 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:03.145977 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:03.154173 systemd-logind[1991]: New session 4 of user core. Apr 30 00:44:03.165381 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:44:03.341308 sshd[2279]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:03.347731 systemd[1]: sshd@3-172.31.18.219:22-147.75.109.163:51518.service: Deactivated successfully. Apr 30 00:44:03.351460 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:44:03.353732 systemd-logind[1991]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:44:03.356191 systemd-logind[1991]: Removed session 4. 
Apr 30 00:44:03.396601 systemd[1]: Started sshd@4-172.31.18.219:22-147.75.109.163:51532.service - OpenSSH per-connection server daemon (147.75.109.163:51532). Apr 30 00:44:03.654656 sshd[2286]: Accepted publickey for core from 147.75.109.163 port 51532 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:03.657547 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:03.666178 systemd-logind[1991]: New session 5 of user core. Apr 30 00:44:03.678379 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 00:44:03.845211 sshd[2286]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:03.852731 systemd[1]: sshd@4-172.31.18.219:22-147.75.109.163:51532.service: Deactivated successfully. Apr 30 00:44:03.857313 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:44:03.859043 systemd-logind[1991]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:44:03.861306 systemd-logind[1991]: Removed session 5. Apr 30 00:44:03.904645 systemd[1]: Started sshd@5-172.31.18.219:22-147.75.109.163:51534.service - OpenSSH per-connection server daemon (147.75.109.163:51534). Apr 30 00:44:04.177213 sshd[2293]: Accepted publickey for core from 147.75.109.163 port 51534 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:04.180385 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:04.190012 systemd-logind[1991]: New session 6 of user core. Apr 30 00:44:04.197437 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 00:44:04.379235 sshd[2293]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:04.385847 systemd[1]: sshd@5-172.31.18.219:22-147.75.109.163:51534.service: Deactivated successfully. Apr 30 00:44:04.390217 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:44:04.391820 systemd-logind[1991]: Session 6 logged out. Waiting for processes to exit. 
Apr 30 00:44:04.393859 systemd-logind[1991]: Removed session 6. Apr 30 00:44:04.432605 systemd[1]: Started sshd@6-172.31.18.219:22-147.75.109.163:51546.service - OpenSSH per-connection server daemon (147.75.109.163:51546). Apr 30 00:44:04.443472 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 00:44:04.451572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:04.704673 sshd[2300]: Accepted publickey for core from 147.75.109.163 port 51546 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:04.709100 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:04.722192 systemd-logind[1991]: New session 7 of user core. Apr 30 00:44:04.731374 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:44:04.795417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:04.804619 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:44:04.889243 kubelet[2311]: E0430 00:44:04.889159 2311 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:04.898006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:04.898631 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 30 00:44:04.913265 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:44:04.913965 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:04.930415 sudo[2317]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:04.969513 sshd[2300]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:04.978292 systemd[1]: sshd@6-172.31.18.219:22-147.75.109.163:51546.service: Deactivated successfully. Apr 30 00:44:04.982337 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:44:04.984396 systemd-logind[1991]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:44:04.986732 systemd-logind[1991]: Removed session 7. Apr 30 00:44:05.030531 systemd[1]: Started sshd@7-172.31.18.219:22-147.75.109.163:51554.service - OpenSSH per-connection server daemon (147.75.109.163:51554). Apr 30 00:44:05.288870 sshd[2323]: Accepted publickey for core from 147.75.109.163 port 51554 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:05.291982 sshd[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:05.301394 systemd-logind[1991]: New session 8 of user core. Apr 30 00:44:05.308328 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 30 00:44:05.452890 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:44:05.453642 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:05.460457 sudo[2327]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:05.471656 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 00:44:05.472388 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:05.498570 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 00:44:05.502920 auditctl[2330]: No rules Apr 30 00:44:05.503646 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:44:05.505159 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 00:44:05.512694 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:44:05.566445 augenrules[2348]: No rules Apr 30 00:44:05.569822 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:44:05.572232 sudo[2326]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:05.611408 sshd[2323]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:05.616971 systemd-logind[1991]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:44:05.618226 systemd[1]: sshd@7-172.31.18.219:22-147.75.109.163:51554.service: Deactivated successfully. Apr 30 00:44:05.621889 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:44:05.623842 systemd-logind[1991]: Removed session 8. Apr 30 00:44:05.665585 systemd[1]: Started sshd@8-172.31.18.219:22-147.75.109.163:51564.service - OpenSSH per-connection server daemon (147.75.109.163:51564). 
Apr 30 00:44:05.921608 sshd[2356]: Accepted publickey for core from 147.75.109.163 port 51564 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:05.924141 sshd[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:05.932412 systemd-logind[1991]: New session 9 of user core. Apr 30 00:44:05.939383 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 00:44:06.082853 sudo[2359]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:44:06.083667 sudo[2359]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:06.666589 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 00:44:06.669326 (dockerd)[2376]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:44:07.170668 dockerd[2376]: time="2025-04-30T00:44:07.170553416Z" level=info msg="Starting up" Apr 30 00:44:07.381835 dockerd[2376]: time="2025-04-30T00:44:07.381136532Z" level=info msg="Loading containers: start." Apr 30 00:44:07.594374 kernel: Initializing XFRM netlink socket Apr 30 00:44:07.657058 (udev-worker)[2400]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:44:07.750531 systemd-networkd[1930]: docker0: Link UP Apr 30 00:44:07.775492 dockerd[2376]: time="2025-04-30T00:44:07.775437131Z" level=info msg="Loading containers: done." 
Apr 30 00:44:07.800567 dockerd[2376]: time="2025-04-30T00:44:07.800495970Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 00:44:07.800932 dockerd[2376]: time="2025-04-30T00:44:07.800657535Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 00:44:07.800932 dockerd[2376]: time="2025-04-30T00:44:07.800847302Z" level=info msg="Daemon has completed initialization" Apr 30 00:44:07.860148 dockerd[2376]: time="2025-04-30T00:44:07.859732806Z" level=info msg="API listen on /run/docker.sock" Apr 30 00:44:07.861671 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 00:44:09.226824 containerd[2002]: time="2025-04-30T00:44:09.226723417Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" Apr 30 00:44:09.867404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount222948785.mount: Deactivated successfully. 
Apr 30 00:44:11.248763 containerd[2002]: time="2025-04-30T00:44:11.248671565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:11.251207 containerd[2002]: time="2025-04-30T00:44:11.251103348Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118" Apr 30 00:44:11.252284 containerd[2002]: time="2025-04-30T00:44:11.252181838Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:11.258663 containerd[2002]: time="2025-04-30T00:44:11.258527209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:11.261431 containerd[2002]: time="2025-04-30T00:44:11.261363267Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.034543964s" Apr 30 00:44:11.262262 containerd[2002]: time="2025-04-30T00:44:11.261637288Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" Apr 30 00:44:11.262730 containerd[2002]: time="2025-04-30T00:44:11.262631911Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" Apr 30 00:44:12.724036 containerd[2002]: time="2025-04-30T00:44:12.723852304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:12.726000 containerd[2002]: time="2025-04-30T00:44:12.725940904Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571" Apr 30 00:44:12.727107 containerd[2002]: time="2025-04-30T00:44:12.727035820Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:12.732893 containerd[2002]: time="2025-04-30T00:44:12.732785032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:12.735572 containerd[2002]: time="2025-04-30T00:44:12.735289180Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.472392455s" Apr 30 00:44:12.735572 containerd[2002]: time="2025-04-30T00:44:12.735393340Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" Apr 30 00:44:12.736334 containerd[2002]: time="2025-04-30T00:44:12.736268260Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" Apr 30 00:44:13.923007 containerd[2002]: time="2025-04-30T00:44:13.922941630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:13.924257 containerd[2002]: time="2025-04-30T00:44:13.924136542Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173" Apr 30 00:44:13.926032 containerd[2002]: time="2025-04-30T00:44:13.925915014Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:13.937420 containerd[2002]: time="2025-04-30T00:44:13.937327062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:13.942984 containerd[2002]: time="2025-04-30T00:44:13.942804534Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.206304938s" Apr 30 00:44:13.942984 containerd[2002]: time="2025-04-30T00:44:13.942886698Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" Apr 30 00:44:13.943786 containerd[2002]: time="2025-04-30T00:44:13.943733778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" Apr 30 00:44:15.149651 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 00:44:15.158426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:15.280247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202499762.mount: Deactivated successfully. Apr 30 00:44:15.551451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:44:15.568888 (kubelet)[2595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:44:15.672685 kubelet[2595]: E0430 00:44:15.672425 2595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:15.678629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:15.679053 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:44:16.065597 containerd[2002]: time="2025-04-30T00:44:16.065516704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:16.068025 containerd[2002]: time="2025-04-30T00:44:16.067929340Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351" Apr 30 00:44:16.069618 containerd[2002]: time="2025-04-30T00:44:16.069505900Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:16.073949 containerd[2002]: time="2025-04-30T00:44:16.073841104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:16.077043 containerd[2002]: time="2025-04-30T00:44:16.075743200Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 2.131735798s" Apr 30 00:44:16.077043 containerd[2002]: time="2025-04-30T00:44:16.075823660Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" Apr 30 00:44:16.077578 containerd[2002]: time="2025-04-30T00:44:16.077529556Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Apr 30 00:44:16.664163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239857513.mount: Deactivated successfully. Apr 30 00:44:17.886149 containerd[2002]: time="2025-04-30T00:44:17.885620409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:17.888330 containerd[2002]: time="2025-04-30T00:44:17.888238797Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Apr 30 00:44:17.889599 containerd[2002]: time="2025-04-30T00:44:17.889505373Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:17.898874 containerd[2002]: time="2025-04-30T00:44:17.898799517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:17.900521 containerd[2002]: time="2025-04-30T00:44:17.899975193Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.822220937s" Apr 30 00:44:17.900521 containerd[2002]: time="2025-04-30T00:44:17.900052005Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Apr 30 00:44:17.901448 containerd[2002]: time="2025-04-30T00:44:17.901156833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 30 00:44:18.448919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1620151158.mount: Deactivated successfully. Apr 30 00:44:18.457783 containerd[2002]: time="2025-04-30T00:44:18.457693688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:18.459554 containerd[2002]: time="2025-04-30T00:44:18.459485648Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Apr 30 00:44:18.460331 containerd[2002]: time="2025-04-30T00:44:18.460195628Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:18.465256 containerd[2002]: time="2025-04-30T00:44:18.465127676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:18.467488 containerd[2002]: time="2025-04-30T00:44:18.467226848Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 566.006787ms" Apr 30 
00:44:18.467488 containerd[2002]: time="2025-04-30T00:44:18.467295980Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 30 00:44:18.468987 containerd[2002]: time="2025-04-30T00:44:18.468011264Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Apr 30 00:44:19.042521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679958614.mount: Deactivated successfully. Apr 30 00:44:20.410662 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 30 00:44:21.095778 containerd[2002]: time="2025-04-30T00:44:21.095696373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:21.098206 containerd[2002]: time="2025-04-30T00:44:21.098130765Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" Apr 30 00:44:21.100817 containerd[2002]: time="2025-04-30T00:44:21.100739589Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:21.107441 containerd[2002]: time="2025-04-30T00:44:21.107362305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:21.110181 containerd[2002]: time="2025-04-30T00:44:21.109895769Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.641785133s" Apr 30 00:44:21.110181 containerd[2002]: 
time="2025-04-30T00:44:21.109960389Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Apr 30 00:44:25.929364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 00:44:25.940245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:26.281489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:26.291727 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:44:26.415816 kubelet[2744]: E0430 00:44:26.415731 2744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:26.420460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:26.420781 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:44:27.191379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:27.207948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:27.268882 systemd[1]: Reloading requested from client PID 2758 ('systemctl') (unit session-9.scope)... Apr 30 00:44:27.268928 systemd[1]: Reloading... Apr 30 00:44:27.538226 zram_generator::config[2801]: No configuration found. Apr 30 00:44:27.783276 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:44:27.967330 systemd[1]: Reloading finished in 697 ms. 
Apr 30 00:44:28.075055 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 00:44:28.075320 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 00:44:28.075940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:28.085735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:28.404378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:28.415650 (kubelet)[2861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:44:28.489631 kubelet[2861]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:44:28.489631 kubelet[2861]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 00:44:28.489631 kubelet[2861]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:44:28.490222 kubelet[2861]: I0430 00:44:28.489745 2861 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:44:29.196279 kubelet[2861]: I0430 00:44:29.196224 2861 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 00:44:29.196587 kubelet[2861]: I0430 00:44:29.196564 2861 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:44:29.197871 kubelet[2861]: I0430 00:44:29.197823 2861 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 00:44:29.251269 kubelet[2861]: E0430 00:44:29.251218 2861 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.219:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.219:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:44:29.253605 kubelet[2861]: I0430 00:44:29.253537 2861 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:44:29.269793 kubelet[2861]: E0430 00:44:29.269739 2861 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:44:29.270626 kubelet[2861]: I0430 00:44:29.270130 2861 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:44:29.276105 kubelet[2861]: I0430 00:44:29.276032 2861 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:44:29.278618 kubelet[2861]: I0430 00:44:29.278510 2861 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:44:29.279896 kubelet[2861]: I0430 00:44:29.278827 2861 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-219","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:44:29.279896 kubelet[2861]: I0430 00:44:29.279249 2861 topology_manager.go:138] "Creating topology manager with none 
policy" Apr 30 00:44:29.279896 kubelet[2861]: I0430 00:44:29.279271 2861 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 00:44:29.279896 kubelet[2861]: I0430 00:44:29.279548 2861 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:44:29.286022 kubelet[2861]: I0430 00:44:29.285964 2861 kubelet.go:446] "Attempting to sync node with API server" Apr 30 00:44:29.286426 kubelet[2861]: I0430 00:44:29.286400 2861 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:44:29.286553 kubelet[2861]: I0430 00:44:29.286534 2861 kubelet.go:352] "Adding apiserver pod source" Apr 30 00:44:29.286792 kubelet[2861]: I0430 00:44:29.286668 2861 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:44:29.294722 kubelet[2861]: I0430 00:44:29.294662 2861 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:44:29.295745 kubelet[2861]: I0430 00:44:29.295676 2861 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:44:29.296275 kubelet[2861]: W0430 00:44:29.296241 2861 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 00:44:29.297746 kubelet[2861]: I0430 00:44:29.297695 2861 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 00:44:29.297996 kubelet[2861]: I0430 00:44:29.297965 2861 server.go:1287] "Started kubelet" Apr 30 00:44:29.298488 kubelet[2861]: W0430 00:44:29.298414 2861 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-219&limit=500&resourceVersion=0": dial tcp 172.31.18.219:6443: connect: connection refused Apr 30 00:44:29.298979 kubelet[2861]: E0430 00:44:29.298716 2861 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-219&limit=500&resourceVersion=0\": dial tcp 172.31.18.219:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:44:29.310840 kubelet[2861]: I0430 00:44:29.310798 2861 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:44:29.313111 kubelet[2861]: W0430 00:44:29.312408 2861 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.219:6443: connect: connection refused Apr 30 00:44:29.313111 kubelet[2861]: E0430 00:44:29.312525 2861 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.219:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:44:29.320550 kubelet[2861]: I0430 00:44:29.320446 2861 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:44:29.324374 kubelet[2861]: I0430 
00:44:29.324253 2861 server.go:490] "Adding debug handlers to kubelet server" Apr 30 00:44:29.330000 kubelet[2861]: I0430 00:44:29.329250 2861 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:44:29.330000 kubelet[2861]: I0430 00:44:29.329689 2861 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:44:29.330353 kubelet[2861]: I0430 00:44:29.330308 2861 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 00:44:29.331411 kubelet[2861]: E0430 00:44:29.311618 2861 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.219:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.219:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-219.183af1fbca9dedc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-219,UID:ip-172-31-18-219,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-219,},FirstTimestamp:2025-04-30 00:44:29.297921474 +0000 UTC m=+0.876226049,LastTimestamp:2025-04-30 00:44:29.297921474 +0000 UTC m=+0.876226049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-219,}" Apr 30 00:44:29.331760 kubelet[2861]: I0430 00:44:29.331729 2861 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 30 00:44:29.333190 kubelet[2861]: E0430 00:44:29.332639 2861 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-219\" not found" Apr 30 00:44:29.333559 kubelet[2861]: I0430 00:44:29.333531 2861 reconciler.go:26] "Reconciler: start to sync state" Apr 30 
00:44:29.341357 kubelet[2861]: I0430 00:44:29.340532 2861 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:44:29.341357 kubelet[2861]: E0430 00:44:29.340659 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-219?timeout=10s\": dial tcp 172.31.18.219:6443: connect: connection refused" interval="200ms" Apr 30 00:44:29.341357 kubelet[2861]: I0430 00:44:29.340696 2861 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:44:29.341357 kubelet[2861]: I0430 00:44:29.340757 2861 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:44:29.345614 kubelet[2861]: E0430 00:44:29.345545 2861 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:44:29.347269 kubelet[2861]: W0430 00:44:29.347183 2861 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.219:6443: connect: connection refused Apr 30 00:44:29.347457 kubelet[2861]: E0430 00:44:29.347277 2861 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.219:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:44:29.349447 kubelet[2861]: I0430 00:44:29.349367 2861 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:44:29.366612 kubelet[2861]: I0430 00:44:29.366523 2861 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:44:29.371059 kubelet[2861]: I0430 00:44:29.370474 2861 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:44:29.371059 kubelet[2861]: I0430 00:44:29.370520 2861 status_manager.go:227] "Starting to sync pod status with apiserver" Apr 30 00:44:29.371059 kubelet[2861]: I0430 00:44:29.370552 2861 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 30 00:44:29.371059 kubelet[2861]: I0430 00:44:29.370566 2861 kubelet.go:2388] "Starting kubelet main sync loop" Apr 30 00:44:29.371059 kubelet[2861]: E0430 00:44:29.370650 2861 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:44:29.385273 kubelet[2861]: W0430 00:44:29.385209 2861 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.219:6443: connect: connection refused Apr 30 00:44:29.386097 kubelet[2861]: E0430 00:44:29.385777 2861 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.219:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:44:29.401555 kubelet[2861]: I0430 00:44:29.401399 2861 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 30 00:44:29.401555 kubelet[2861]: I0430 00:44:29.401433 2861 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 30 00:44:29.401555 kubelet[2861]: I0430 00:44:29.401467 2861 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:44:29.407597 kubelet[2861]: 
I0430 00:44:29.407542 2861 policy_none.go:49] "None policy: Start" Apr 30 00:44:29.407597 kubelet[2861]: I0430 00:44:29.407590 2861 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 30 00:44:29.407780 kubelet[2861]: I0430 00:44:29.407615 2861 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:44:29.420364 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 00:44:29.432955 kubelet[2861]: E0430 00:44:29.432892 2861 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-219\" not found" Apr 30 00:44:29.440736 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 00:44:29.447856 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 00:44:29.461839 kubelet[2861]: I0430 00:44:29.460782 2861 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:44:29.461839 kubelet[2861]: I0430 00:44:29.461125 2861 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 00:44:29.461839 kubelet[2861]: I0430 00:44:29.461149 2861 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:44:29.462682 kubelet[2861]: I0430 00:44:29.462655 2861 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:44:29.464897 kubelet[2861]: E0430 00:44:29.464861 2861 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 30 00:44:29.465297 kubelet[2861]: E0430 00:44:29.465270 2861 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-219\" not found" Apr 30 00:44:29.489165 systemd[1]: Created slice kubepods-burstable-podfba061df7fdcc1c007ce2c29a5cce6d5.slice - libcontainer container kubepods-burstable-podfba061df7fdcc1c007ce2c29a5cce6d5.slice. Apr 30 00:44:29.506103 kubelet[2861]: E0430 00:44:29.505835 2861 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-219\" not found" node="ip-172-31-18-219" Apr 30 00:44:29.511756 systemd[1]: Created slice kubepods-burstable-pod25779bee1f60c053196ebdabd4997c49.slice - libcontainer container kubepods-burstable-pod25779bee1f60c053196ebdabd4997c49.slice. Apr 30 00:44:29.523467 kubelet[2861]: E0430 00:44:29.523400 2861 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-219\" not found" node="ip-172-31-18-219" Apr 30 00:44:29.528351 systemd[1]: Created slice kubepods-burstable-poda24183e07db9251d76a1bf255e7444d5.slice - libcontainer container kubepods-burstable-poda24183e07db9251d76a1bf255e7444d5.slice. 
Apr 30 00:44:29.532211 kubelet[2861]: E0430 00:44:29.532169 2861 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-219\" not found" node="ip-172-31-18-219" Apr 30 00:44:29.541883 kubelet[2861]: E0430 00:44:29.541828 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-219?timeout=10s\": dial tcp 172.31.18.219:6443: connect: connection refused" interval="400ms" Apr 30 00:44:29.564814 kubelet[2861]: I0430 00:44:29.564366 2861 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-219" Apr 30 00:44:29.564969 kubelet[2861]: E0430 00:44:29.564908 2861 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.219:6443/api/v1/nodes\": dial tcp 172.31.18.219:6443: connect: connection refused" node="ip-172-31-18-219" Apr 30 00:44:29.636478 kubelet[2861]: I0430 00:44:29.636404 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25779bee1f60c053196ebdabd4997c49-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-219\" (UID: \"25779bee1f60c053196ebdabd4997c49\") " pod="kube-system/kube-controller-manager-ip-172-31-18-219" Apr 30 00:44:29.636478 kubelet[2861]: I0430 00:44:29.636478 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/25779bee1f60c053196ebdabd4997c49-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-219\" (UID: \"25779bee1f60c053196ebdabd4997c49\") " pod="kube-system/kube-controller-manager-ip-172-31-18-219" Apr 30 00:44:29.636686 kubelet[2861]: I0430 00:44:29.636520 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/25779bee1f60c053196ebdabd4997c49-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-219\" (UID: \"25779bee1f60c053196ebdabd4997c49\") " pod="kube-system/kube-controller-manager-ip-172-31-18-219" Apr 30 00:44:29.636686 kubelet[2861]: I0430 00:44:29.636558 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25779bee1f60c053196ebdabd4997c49-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-219\" (UID: \"25779bee1f60c053196ebdabd4997c49\") " pod="kube-system/kube-controller-manager-ip-172-31-18-219" Apr 30 00:44:29.636686 kubelet[2861]: I0430 00:44:29.636596 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a24183e07db9251d76a1bf255e7444d5-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-219\" (UID: \"a24183e07db9251d76a1bf255e7444d5\") " pod="kube-system/kube-scheduler-ip-172-31-18-219" Apr 30 00:44:29.636686 kubelet[2861]: I0430 00:44:29.636629 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fba061df7fdcc1c007ce2c29a5cce6d5-ca-certs\") pod \"kube-apiserver-ip-172-31-18-219\" (UID: \"fba061df7fdcc1c007ce2c29a5cce6d5\") " pod="kube-system/kube-apiserver-ip-172-31-18-219" Apr 30 00:44:29.636686 kubelet[2861]: I0430 00:44:29.636663 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fba061df7fdcc1c007ce2c29a5cce6d5-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-219\" (UID: \"fba061df7fdcc1c007ce2c29a5cce6d5\") " pod="kube-system/kube-apiserver-ip-172-31-18-219" Apr 30 00:44:29.636952 kubelet[2861]: I0430 00:44:29.636697 2861 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fba061df7fdcc1c007ce2c29a5cce6d5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-219\" (UID: \"fba061df7fdcc1c007ce2c29a5cce6d5\") " pod="kube-system/kube-apiserver-ip-172-31-18-219" Apr 30 00:44:29.636952 kubelet[2861]: I0430 00:44:29.636732 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25779bee1f60c053196ebdabd4997c49-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-219\" (UID: \"25779bee1f60c053196ebdabd4997c49\") " pod="kube-system/kube-controller-manager-ip-172-31-18-219" Apr 30 00:44:29.767715 kubelet[2861]: I0430 00:44:29.767203 2861 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-219" Apr 30 00:44:29.767715 kubelet[2861]: E0430 00:44:29.767658 2861 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.219:6443/api/v1/nodes\": dial tcp 172.31.18.219:6443: connect: connection refused" node="ip-172-31-18-219" Apr 30 00:44:29.808526 containerd[2002]: time="2025-04-30T00:44:29.808471640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-219,Uid:fba061df7fdcc1c007ce2c29a5cce6d5,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:29.826315 containerd[2002]: time="2025-04-30T00:44:29.825779301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-219,Uid:25779bee1f60c053196ebdabd4997c49,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:29.833606 containerd[2002]: time="2025-04-30T00:44:29.833534781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-219,Uid:a24183e07db9251d76a1bf255e7444d5,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:29.943854 kubelet[2861]: E0430 00:44:29.943749 2861 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://172.31.18.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-219?timeout=10s\": dial tcp 172.31.18.219:6443: connect: connection refused" interval="800ms" Apr 30 00:44:30.170632 kubelet[2861]: I0430 00:44:30.170445 2861 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-219" Apr 30 00:44:30.171338 kubelet[2861]: E0430 00:44:30.171216 2861 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.219:6443/api/v1/nodes\": dial tcp 172.31.18.219:6443: connect: connection refused" node="ip-172-31-18-219" Apr 30 00:44:30.281382 kubelet[2861]: W0430 00:44:30.281252 2861 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.219:6443: connect: connection refused Apr 30 00:44:30.281382 kubelet[2861]: E0430 00:44:30.281365 2861 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.219:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.219:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:44:30.376376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2205541857.mount: Deactivated successfully. 
Apr 30 00:44:30.393859 containerd[2002]: time="2025-04-30T00:44:30.393761227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:30.398559 containerd[2002]: time="2025-04-30T00:44:30.398337055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:44:30.401719 containerd[2002]: time="2025-04-30T00:44:30.400740763Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:30.403892 containerd[2002]: time="2025-04-30T00:44:30.403713643Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:30.406583 containerd[2002]: time="2025-04-30T00:44:30.406487335Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:30.408621 containerd[2002]: time="2025-04-30T00:44:30.408540211Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 30 00:44:30.409446 containerd[2002]: time="2025-04-30T00:44:30.409338811Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:44:30.415146 containerd[2002]: time="2025-04-30T00:44:30.414934903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:30.419386 
containerd[2002]: time="2025-04-30T00:44:30.418620031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 609.260283ms" Apr 30 00:44:30.423034 containerd[2002]: time="2025-04-30T00:44:30.422787559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 589.134782ms" Apr 30 00:44:30.438624 kubelet[2861]: W0430 00:44:30.438465 2861 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.219:6443: connect: connection refused Apr 30 00:44:30.438624 kubelet[2861]: E0430 00:44:30.438549 2861 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.219:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:44:30.452475 containerd[2002]: time="2025-04-30T00:44:30.452404412Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 626.506467ms" Apr 30 00:44:30.538668 kubelet[2861]: W0430 00:44:30.538476 2861 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.219:6443: connect: connection refused Apr 30 00:44:30.538668 kubelet[2861]: E0430 00:44:30.538557 2861 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.219:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:44:30.606092 kubelet[2861]: W0430 00:44:30.605990 2861 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-219&limit=500&resourceVersion=0": dial tcp 172.31.18.219:6443: connect: connection refused Apr 30 00:44:30.606226 kubelet[2861]: E0430 00:44:30.606113 2861 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-219&limit=500&resourceVersion=0\": dial tcp 172.31.18.219:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:44:30.624141 containerd[2002]: time="2025-04-30T00:44:30.623778656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:30.625479 containerd[2002]: time="2025-04-30T00:44:30.624667760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:30.625479 containerd[2002]: time="2025-04-30T00:44:30.624840092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:30.626565 containerd[2002]: time="2025-04-30T00:44:30.626173388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:30.634775 containerd[2002]: time="2025-04-30T00:44:30.632293185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:30.634775 containerd[2002]: time="2025-04-30T00:44:30.634295637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:30.634775 containerd[2002]: time="2025-04-30T00:44:30.634329009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:30.634775 containerd[2002]: time="2025-04-30T00:44:30.634499721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:30.637855 containerd[2002]: time="2025-04-30T00:44:30.637603881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:30.640032 containerd[2002]: time="2025-04-30T00:44:30.639236241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:30.640032 containerd[2002]: time="2025-04-30T00:44:30.639331269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:30.641935 containerd[2002]: time="2025-04-30T00:44:30.640574661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:30.683255 systemd[1]: Started cri-containerd-c4e928f5ebfb15ea7a7b18139ef44e268316d6087d57952426fe7eee5d172abb.scope - libcontainer container c4e928f5ebfb15ea7a7b18139ef44e268316d6087d57952426fe7eee5d172abb. Apr 30 00:44:30.701603 systemd[1]: Started cri-containerd-df0451ac78dff6ce6b3b816d3690b1260614ff875ab5178db987cc3938daa8b8.scope - libcontainer container df0451ac78dff6ce6b3b816d3690b1260614ff875ab5178db987cc3938daa8b8. Apr 30 00:44:30.730417 systemd[1]: Started cri-containerd-3667808a603251b69cdc0476d09730c4e20efa5eb44a0c808f52e65ba7562eb3.scope - libcontainer container 3667808a603251b69cdc0476d09730c4e20efa5eb44a0c808f52e65ba7562eb3. Apr 30 00:44:30.747215 kubelet[2861]: E0430 00:44:30.746988 2861 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-219?timeout=10s\": dial tcp 172.31.18.219:6443: connect: connection refused" interval="1.6s" Apr 30 00:44:30.848555 containerd[2002]: time="2025-04-30T00:44:30.848474554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-219,Uid:a24183e07db9251d76a1bf255e7444d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4e928f5ebfb15ea7a7b18139ef44e268316d6087d57952426fe7eee5d172abb\"" Apr 30 00:44:30.862847 containerd[2002]: time="2025-04-30T00:44:30.860779270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-219,Uid:fba061df7fdcc1c007ce2c29a5cce6d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"df0451ac78dff6ce6b3b816d3690b1260614ff875ab5178db987cc3938daa8b8\"" Apr 30 00:44:30.872594 containerd[2002]: time="2025-04-30T00:44:30.872339338Z" level=info msg="CreateContainer within sandbox \"c4e928f5ebfb15ea7a7b18139ef44e268316d6087d57952426fe7eee5d172abb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:44:30.873547 
containerd[2002]: time="2025-04-30T00:44:30.872552326Z" level=info msg="CreateContainer within sandbox \"df0451ac78dff6ce6b3b816d3690b1260614ff875ab5178db987cc3938daa8b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:44:30.896838 containerd[2002]: time="2025-04-30T00:44:30.896706658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-219,Uid:25779bee1f60c053196ebdabd4997c49,Namespace:kube-system,Attempt:0,} returns sandbox id \"3667808a603251b69cdc0476d09730c4e20efa5eb44a0c808f52e65ba7562eb3\"" Apr 30 00:44:30.905948 containerd[2002]: time="2025-04-30T00:44:30.905861590Z" level=info msg="CreateContainer within sandbox \"3667808a603251b69cdc0476d09730c4e20efa5eb44a0c808f52e65ba7562eb3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:44:30.921322 containerd[2002]: time="2025-04-30T00:44:30.921251374Z" level=info msg="CreateContainer within sandbox \"c4e928f5ebfb15ea7a7b18139ef44e268316d6087d57952426fe7eee5d172abb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5dbd9d87d2813b5727a2452d53a7f7ab5fd3b29ce3a473e651385620c664f80e\"" Apr 30 00:44:30.926954 containerd[2002]: time="2025-04-30T00:44:30.925390030Z" level=info msg="StartContainer for \"5dbd9d87d2813b5727a2452d53a7f7ab5fd3b29ce3a473e651385620c664f80e\"" Apr 30 00:44:30.933109 containerd[2002]: time="2025-04-30T00:44:30.933019726Z" level=info msg="CreateContainer within sandbox \"df0451ac78dff6ce6b3b816d3690b1260614ff875ab5178db987cc3938daa8b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4968d58a876c8de7ea8694dfc19de3fec52ba2cb358741bf49c6e12c7edf7d17\"" Apr 30 00:44:30.935275 containerd[2002]: time="2025-04-30T00:44:30.935120746Z" level=info msg="StartContainer for \"4968d58a876c8de7ea8694dfc19de3fec52ba2cb358741bf49c6e12c7edf7d17\"" Apr 30 00:44:30.953213 containerd[2002]: time="2025-04-30T00:44:30.953129914Z" level=info 
msg="CreateContainer within sandbox \"3667808a603251b69cdc0476d09730c4e20efa5eb44a0c808f52e65ba7562eb3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0a73bf0a81a39c8f0b568b7c19a1d5b2c97976c74d3ba8cf976b325cd527ce4\"" Apr 30 00:44:30.956548 containerd[2002]: time="2025-04-30T00:44:30.956481946Z" level=info msg="StartContainer for \"c0a73bf0a81a39c8f0b568b7c19a1d5b2c97976c74d3ba8cf976b325cd527ce4\"" Apr 30 00:44:30.975559 kubelet[2861]: I0430 00:44:30.975498 2861 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-219" Apr 30 00:44:30.978148 kubelet[2861]: E0430 00:44:30.976052 2861 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.18.219:6443/api/v1/nodes\": dial tcp 172.31.18.219:6443: connect: connection refused" node="ip-172-31-18-219" Apr 30 00:44:31.005434 systemd[1]: Started cri-containerd-5dbd9d87d2813b5727a2452d53a7f7ab5fd3b29ce3a473e651385620c664f80e.scope - libcontainer container 5dbd9d87d2813b5727a2452d53a7f7ab5fd3b29ce3a473e651385620c664f80e. Apr 30 00:44:31.039443 systemd[1]: Started cri-containerd-4968d58a876c8de7ea8694dfc19de3fec52ba2cb358741bf49c6e12c7edf7d17.scope - libcontainer container 4968d58a876c8de7ea8694dfc19de3fec52ba2cb358741bf49c6e12c7edf7d17. Apr 30 00:44:31.070447 systemd[1]: Started cri-containerd-c0a73bf0a81a39c8f0b568b7c19a1d5b2c97976c74d3ba8cf976b325cd527ce4.scope - libcontainer container c0a73bf0a81a39c8f0b568b7c19a1d5b2c97976c74d3ba8cf976b325cd527ce4. 
Apr 30 00:44:31.151949 containerd[2002]: time="2025-04-30T00:44:31.151454455Z" level=info msg="StartContainer for \"5dbd9d87d2813b5727a2452d53a7f7ab5fd3b29ce3a473e651385620c664f80e\" returns successfully" Apr 30 00:44:31.191957 containerd[2002]: time="2025-04-30T00:44:31.191428207Z" level=info msg="StartContainer for \"4968d58a876c8de7ea8694dfc19de3fec52ba2cb358741bf49c6e12c7edf7d17\" returns successfully" Apr 30 00:44:31.229443 containerd[2002]: time="2025-04-30T00:44:31.229358251Z" level=info msg="StartContainer for \"c0a73bf0a81a39c8f0b568b7c19a1d5b2c97976c74d3ba8cf976b325cd527ce4\" returns successfully" Apr 30 00:44:31.260116 kubelet[2861]: E0430 00:44:31.259964 2861 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.219:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.219:6443: connect: connection refused" logger="UnhandledError" Apr 30 00:44:31.405934 kubelet[2861]: E0430 00:44:31.405862 2861 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-219\" not found" node="ip-172-31-18-219" Apr 30 00:44:31.419300 kubelet[2861]: E0430 00:44:31.419228 2861 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-219\" not found" node="ip-172-31-18-219" Apr 30 00:44:31.426038 kubelet[2861]: E0430 00:44:31.425985 2861 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-219\" not found" node="ip-172-31-18-219" Apr 30 00:44:32.428687 kubelet[2861]: E0430 00:44:32.428630 2861 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-219\" not found" node="ip-172-31-18-219" Apr 30 
00:44:32.429672 kubelet[2861]: E0430 00:44:32.429628 2861 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-219\" not found" node="ip-172-31-18-219" Apr 30 00:44:32.532783 kubelet[2861]: E0430 00:44:32.532722 2861 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-219\" not found" node="ip-172-31-18-219" Apr 30 00:44:32.579964 kubelet[2861]: I0430 00:44:32.579890 2861 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-219" Apr 30 00:44:34.562102 update_engine[1992]: I20250430 00:44:34.560113 1992 update_attempter.cc:509] Updating boot flags... Apr 30 00:44:34.726139 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3151) Apr 30 00:44:35.047109 kubelet[2861]: E0430 00:44:35.045519 2861 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-219\" not found" node="ip-172-31-18-219" Apr 30 00:44:35.106123 kubelet[2861]: I0430 00:44:35.099251 2861 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-18-219" Apr 30 00:44:35.106123 kubelet[2861]: E0430 00:44:35.105286 2861 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-219\": node \"ip-172-31-18-219\" not found" Apr 30 00:44:35.143471 kubelet[2861]: I0430 00:44:35.136310 2861 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-219" Apr 30 00:44:35.232886 kubelet[2861]: E0430 00:44:35.232433 2861 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-219.183af1fbca9dedc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-219,UID:ip-172-31-18-219,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-219,},FirstTimestamp:2025-04-30 00:44:29.297921474 +0000 UTC m=+0.876226049,LastTimestamp:2025-04-30 00:44:29.297921474 +0000 UTC m=+0.876226049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-219,}" Apr 30 00:44:35.251385 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3153) Apr 30 00:44:35.308705 kubelet[2861]: I0430 00:44:35.308185 2861 apiserver.go:52] "Watching apiserver" Apr 30 00:44:35.318177 kubelet[2861]: E0430 00:44:35.317649 2861 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-219\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-219" Apr 30 00:44:35.318354 kubelet[2861]: I0430 00:44:35.318202 2861 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-219" Apr 30 00:44:35.343121 kubelet[2861]: I0430 00:44:35.341496 2861 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:44:35.362856 kubelet[2861]: E0430 00:44:35.362359 2861 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-219\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-219" Apr 30 00:44:35.362856 kubelet[2861]: I0430 00:44:35.362412 2861 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-219" Apr 30 00:44:35.391623 kubelet[2861]: E0430 00:44:35.390057 2861 kubelet.go:3202] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ip-172-31-18-219\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-219" Apr 30 00:44:35.451112 kubelet[2861]: E0430 00:44:35.449974 2861 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-219.183af1fbcd7433fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-219,UID:ip-172-31-18-219,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-18-219,},FirstTimestamp:2025-04-30 00:44:29.345518586 +0000 UTC m=+0.923823113,LastTimestamp:2025-04-30 00:44:29.345518586 +0000 UTC m=+0.923823113,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-219,}" Apr 30 00:44:36.906305 kubelet[2861]: I0430 00:44:36.905859 2861 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-219" Apr 30 00:44:37.470970 kubelet[2861]: I0430 00:44:37.470907 2861 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-219" Apr 30 00:44:37.910389 systemd[1]: Reloading requested from client PID 3321 ('systemctl') (unit session-9.scope)... Apr 30 00:44:37.911112 systemd[1]: Reloading... Apr 30 00:44:38.199136 zram_generator::config[3364]: No configuration found. Apr 30 00:44:38.521228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:44:38.741578 systemd[1]: Reloading finished in 829 ms. Apr 30 00:44:38.841640 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 00:44:38.859626 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:44:38.861296 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:38.861414 systemd[1]: kubelet.service: Consumed 1.724s CPU time, 125.5M memory peak, 0B memory swap peak. Apr 30 00:44:38.871924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:39.236618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:39.253333 (kubelet)[3421]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:44:39.387927 kubelet[3421]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:44:39.387927 kubelet[3421]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 30 00:44:39.387927 kubelet[3421]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:44:39.391049 kubelet[3421]: I0430 00:44:39.390622 3421 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:44:39.406245 kubelet[3421]: I0430 00:44:39.406173 3421 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Apr 30 00:44:39.406245 kubelet[3421]: I0430 00:44:39.406229 3421 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:44:39.406877 kubelet[3421]: I0430 00:44:39.406806 3421 server.go:954] "Client rotation is on, will bootstrap in background" Apr 30 00:44:39.410229 kubelet[3421]: I0430 00:44:39.409734 3421 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 00:44:39.416550 kubelet[3421]: I0430 00:44:39.414999 3421 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:44:39.419003 sudo[3435]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 00:44:39.422689 kubelet[3421]: E0430 00:44:39.422139 3421 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 00:44:39.422689 kubelet[3421]: I0430 00:44:39.422212 3421 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 00:44:39.422448 sudo[3435]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 00:44:39.442514 kubelet[3421]: I0430 00:44:39.442322 3421 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:44:39.443303 kubelet[3421]: I0430 00:44:39.442851 3421 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:44:39.443459 kubelet[3421]: I0430 00:44:39.442920 3421 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-219","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 00:44:39.443600 kubelet[3421]: I0430 00:44:39.443472 3421 topology_manager.go:138] "Creating topology manager with none 
policy" Apr 30 00:44:39.443600 kubelet[3421]: I0430 00:44:39.443495 3421 container_manager_linux.go:304] "Creating device plugin manager" Apr 30 00:44:39.443600 kubelet[3421]: I0430 00:44:39.443586 3421 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:44:39.445110 kubelet[3421]: I0430 00:44:39.443834 3421 kubelet.go:446] "Attempting to sync node with API server" Apr 30 00:44:39.445110 kubelet[3421]: I0430 00:44:39.443874 3421 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:44:39.446392 kubelet[3421]: I0430 00:44:39.445864 3421 kubelet.go:352] "Adding apiserver pod source" Apr 30 00:44:39.446392 kubelet[3421]: I0430 00:44:39.445945 3421 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:44:39.456273 kubelet[3421]: I0430 00:44:39.456219 3421 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:44:39.458564 kubelet[3421]: I0430 00:44:39.458478 3421 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:44:39.460118 kubelet[3421]: I0430 00:44:39.459293 3421 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 30 00:44:39.460118 kubelet[3421]: I0430 00:44:39.459356 3421 server.go:1287] "Started kubelet" Apr 30 00:44:39.476744 kubelet[3421]: I0430 00:44:39.476678 3421 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:44:39.492233 kubelet[3421]: I0430 00:44:39.491984 3421 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:44:39.498790 kubelet[3421]: I0430 00:44:39.495508 3421 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:44:39.500823 kubelet[3421]: I0430 00:44:39.500760 3421 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 
00:44:39.504952 kubelet[3421]: I0430 00:44:39.501332 3421 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 30 00:44:39.504952 kubelet[3421]: E0430 00:44:39.501703 3421 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-219\" not found"
Apr 30 00:44:39.511342 kubelet[3421]: I0430 00:44:39.511289 3421 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:44:39.511596 kubelet[3421]: I0430 00:44:39.511561 3421 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:44:39.529760 kubelet[3421]: I0430 00:44:39.529691 3421 server.go:490] "Adding debug handlers to kubelet server"
Apr 30 00:44:39.536298 kubelet[3421]: I0430 00:44:39.536258 3421 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:44:39.544260 kubelet[3421]: I0430 00:44:39.544190 3421 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 00:44:39.546671 kubelet[3421]: I0430 00:44:39.546626 3421 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 00:44:39.547489 kubelet[3421]: I0430 00:44:39.546868 3421 status_manager.go:227] "Starting to sync pod status with apiserver"
Apr 30 00:44:39.547489 kubelet[3421]: I0430 00:44:39.546918 3421 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 30 00:44:39.547489 kubelet[3421]: I0430 00:44:39.546936 3421 kubelet.go:2388] "Starting kubelet main sync loop"
Apr 30 00:44:39.547489 kubelet[3421]: E0430 00:44:39.547010 3421 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:44:39.570782 kubelet[3421]: I0430 00:44:39.570732 3421 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:44:39.571202 kubelet[3421]: I0430 00:44:39.571162 3421 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:44:39.580449 kubelet[3421]: I0430 00:44:39.580412 3421 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:44:39.585116 kubelet[3421]: E0430 00:44:39.584290 3421 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 00:44:39.612581 kubelet[3421]: E0430 00:44:39.612534 3421 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-18-219\" not found"
Apr 30 00:44:39.650936 kubelet[3421]: E0430 00:44:39.650475 3421 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 30 00:44:39.748148 kubelet[3421]: I0430 00:44:39.747159 3421 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 30 00:44:39.748527 kubelet[3421]: I0430 00:44:39.748494 3421 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 30 00:44:39.748673 kubelet[3421]: I0430 00:44:39.748652 3421 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:44:39.749633 kubelet[3421]: I0430 00:44:39.749046 3421 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 00:44:39.749633 kubelet[3421]: I0430 00:44:39.749111 3421 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 00:44:39.749633 kubelet[3421]: I0430 00:44:39.749167 3421 policy_none.go:49] "None policy: Start"
Apr 30 00:44:39.749633 kubelet[3421]: I0430 00:44:39.749197 3421 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 30 00:44:39.749633 kubelet[3421]: I0430 00:44:39.749221 3421 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:44:39.749633 kubelet[3421]: I0430 00:44:39.749428 3421 state_mem.go:75] "Updated machine memory state"
Apr 30 00:44:39.765009 kubelet[3421]: I0430 00:44:39.761962 3421 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:44:39.765009 kubelet[3421]: I0430 00:44:39.762300 3421 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 30 00:44:39.765009 kubelet[3421]: I0430 00:44:39.762322 3421 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:44:39.765009 kubelet[3421]: I0430 00:44:39.763037 3421 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:44:39.771253 kubelet[3421]: E0430 00:44:39.770214 3421 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 30 00:44:39.853356 kubelet[3421]: I0430 00:44:39.852500 3421 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-219"
Apr 30 00:44:39.853356 kubelet[3421]: I0430 00:44:39.853223 3421 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-219"
Apr 30 00:44:39.853775 kubelet[3421]: I0430 00:44:39.853741 3421 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-219"
Apr 30 00:44:39.871297 kubelet[3421]: E0430 00:44:39.871233 3421 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-219\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-219"
Apr 30 00:44:39.876226 kubelet[3421]: E0430 00:44:39.875495 3421 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-219\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-219"
Apr 30 00:44:39.897189 kubelet[3421]: I0430 00:44:39.897141 3421 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-18-219"
Apr 30 00:44:39.915225 kubelet[3421]: I0430 00:44:39.915165 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fba061df7fdcc1c007ce2c29a5cce6d5-ca-certs\") pod \"kube-apiserver-ip-172-31-18-219\" (UID: \"fba061df7fdcc1c007ce2c29a5cce6d5\") " pod="kube-system/kube-apiserver-ip-172-31-18-219"
Apr 30 00:44:39.915438 kubelet[3421]: I0430 00:44:39.915235 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fba061df7fdcc1c007ce2c29a5cce6d5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-219\" (UID: \"fba061df7fdcc1c007ce2c29a5cce6d5\") " pod="kube-system/kube-apiserver-ip-172-31-18-219"
Apr 30 00:44:39.915438 kubelet[3421]: I0430 00:44:39.915285 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/25779bee1f60c053196ebdabd4997c49-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-219\" (UID: \"25779bee1f60c053196ebdabd4997c49\") " pod="kube-system/kube-controller-manager-ip-172-31-18-219"
Apr 30 00:44:39.915438 kubelet[3421]: I0430 00:44:39.915323 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a24183e07db9251d76a1bf255e7444d5-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-219\" (UID: \"a24183e07db9251d76a1bf255e7444d5\") " pod="kube-system/kube-scheduler-ip-172-31-18-219"
Apr 30 00:44:39.915438 kubelet[3421]: I0430 00:44:39.915361 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fba061df7fdcc1c007ce2c29a5cce6d5-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-219\" (UID: \"fba061df7fdcc1c007ce2c29a5cce6d5\") " pod="kube-system/kube-apiserver-ip-172-31-18-219"
Apr 30 00:44:39.915438 kubelet[3421]: I0430 00:44:39.915396 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25779bee1f60c053196ebdabd4997c49-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-219\" (UID: \"25779bee1f60c053196ebdabd4997c49\") " pod="kube-system/kube-controller-manager-ip-172-31-18-219"
Apr 30 00:44:39.916755 kubelet[3421]: I0430 00:44:39.915430 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25779bee1f60c053196ebdabd4997c49-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-219\" (UID: \"25779bee1f60c053196ebdabd4997c49\") " pod="kube-system/kube-controller-manager-ip-172-31-18-219"
Apr 30 00:44:39.916755 kubelet[3421]: I0430 00:44:39.915469 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/25779bee1f60c053196ebdabd4997c49-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-219\" (UID: \"25779bee1f60c053196ebdabd4997c49\") " pod="kube-system/kube-controller-manager-ip-172-31-18-219"
Apr 30 00:44:39.916755 kubelet[3421]: I0430 00:44:39.915518 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25779bee1f60c053196ebdabd4997c49-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-219\" (UID: \"25779bee1f60c053196ebdabd4997c49\") " pod="kube-system/kube-controller-manager-ip-172-31-18-219"
Apr 30 00:44:39.919460 kubelet[3421]: I0430 00:44:39.919049 3421 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-18-219"
Apr 30 00:44:39.920105 kubelet[3421]: I0430 00:44:39.920017 3421 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-18-219"
Apr 30 00:44:40.454212 kubelet[3421]: I0430 00:44:40.453540 3421 apiserver.go:52] "Watching apiserver"
Apr 30 00:44:40.475480 sudo[3435]: pam_unix(sudo:session): session closed for user root
Apr 30 00:44:40.511524 kubelet[3421]: I0430 00:44:40.511439 3421 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 00:44:40.764104 kubelet[3421]: I0430 00:44:40.761659 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-219" podStartSLOduration=3.761633851 podStartE2EDuration="3.761633851s" podCreationTimestamp="2025-04-30 00:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:40.746682919 +0000 UTC m=+1.476436845" watchObservedRunningTime="2025-04-30 00:44:40.761633851 +0000 UTC m=+1.491387753"
Apr 30 00:44:40.783555 kubelet[3421]: I0430 00:44:40.783280 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-219" podStartSLOduration=1.783259291 podStartE2EDuration="1.783259291s" podCreationTimestamp="2025-04-30 00:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:40.764667199 +0000 UTC m=+1.494421173" watchObservedRunningTime="2025-04-30 00:44:40.783259291 +0000 UTC m=+1.513013181"
Apr 30 00:44:40.805876 kubelet[3421]: I0430 00:44:40.803885 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-219" podStartSLOduration=4.803859259 podStartE2EDuration="4.803859259s" podCreationTimestamp="2025-04-30 00:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:40.784710631 +0000 UTC m=+1.514464605" watchObservedRunningTime="2025-04-30 00:44:40.803859259 +0000 UTC m=+1.533613161"
Apr 30 00:44:42.067480 kubelet[3421]: I0430 00:44:42.067044 3421 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 00:44:42.073935 containerd[2002]: time="2025-04-30T00:44:42.072803837Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 00:44:42.075007 kubelet[3421]: I0430 00:44:42.073483 3421 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 00:44:42.472992 kubelet[3421]: I0430 00:44:42.472714 3421 status_manager.go:890] "Failed to get status for pod" podUID="cc29d8a9-06f8-4da0-be6a-87187446406a" pod="kube-system/kube-proxy-zpbtw" err="pods \"kube-proxy-zpbtw\" is forbidden: User \"system:node:ip-172-31-18-219\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-219' and this object"
Apr 30 00:44:42.475668 systemd[1]: Created slice kubepods-besteffort-podcc29d8a9_06f8_4da0_be6a_87187446406a.slice - libcontainer container kubepods-besteffort-podcc29d8a9_06f8_4da0_be6a_87187446406a.slice.
Apr 30 00:44:42.476837 kubelet[3421]: W0430 00:44:42.475720 3421 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-219" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-219' and this object
Apr 30 00:44:42.476837 kubelet[3421]: E0430 00:44:42.475898 3421 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-18-219\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-219' and this object" logger="UnhandledError"
Apr 30 00:44:42.476837 kubelet[3421]: W0430 00:44:42.476242 3421 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-18-219" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-219' and this object
Apr 30 00:44:42.476837 kubelet[3421]: E0430 00:44:42.476331 3421 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-18-219\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-219' and this object" logger="UnhandledError"
Apr 30 00:44:42.534576 kubelet[3421]: I0430 00:44:42.533236 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-host-proc-sys-net\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.534576 kubelet[3421]: I0430 00:44:42.533304 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2509c74-c623-4e82-be5f-7a9691baa46e-hubble-tls\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.534576 kubelet[3421]: I0430 00:44:42.533347 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-host-proc-sys-kernel\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.534576 kubelet[3421]: I0430 00:44:42.533387 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc29d8a9-06f8-4da0-be6a-87187446406a-xtables-lock\") pod \"kube-proxy-zpbtw\" (UID: \"cc29d8a9-06f8-4da0-be6a-87187446406a\") " pod="kube-system/kube-proxy-zpbtw"
Apr 30 00:44:42.534576 kubelet[3421]: I0430 00:44:42.533426 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwvj8\" (UniqueName: \"kubernetes.io/projected/cc29d8a9-06f8-4da0-be6a-87187446406a-kube-api-access-kwvj8\") pod \"kube-proxy-zpbtw\" (UID: \"cc29d8a9-06f8-4da0-be6a-87187446406a\") " pod="kube-system/kube-proxy-zpbtw"
Apr 30 00:44:42.536674 kubelet[3421]: I0430 00:44:42.533468 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-bpf-maps\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.536674 kubelet[3421]: I0430 00:44:42.533505 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2509c74-c623-4e82-be5f-7a9691baa46e-clustermesh-secrets\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.536674 kubelet[3421]: I0430 00:44:42.533545 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t2lt\" (UniqueName: \"kubernetes.io/projected/c2509c74-c623-4e82-be5f-7a9691baa46e-kube-api-access-9t2lt\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.536674 kubelet[3421]: I0430 00:44:42.533589 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cc29d8a9-06f8-4da0-be6a-87187446406a-kube-proxy\") pod \"kube-proxy-zpbtw\" (UID: \"cc29d8a9-06f8-4da0-be6a-87187446406a\") " pod="kube-system/kube-proxy-zpbtw"
Apr 30 00:44:42.536674 kubelet[3421]: I0430 00:44:42.533660 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-run\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.536674 kubelet[3421]: I0430 00:44:42.533701 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cni-path\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.537002 kubelet[3421]: I0430 00:44:42.533781 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-lib-modules\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.537002 kubelet[3421]: I0430 00:44:42.533829 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-xtables-lock\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.537002 kubelet[3421]: I0430 00:44:42.533869 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-cgroup\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.537002 kubelet[3421]: I0430 00:44:42.533920 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc29d8a9-06f8-4da0-be6a-87187446406a-lib-modules\") pod \"kube-proxy-zpbtw\" (UID: \"cc29d8a9-06f8-4da0-be6a-87187446406a\") " pod="kube-system/kube-proxy-zpbtw"
Apr 30 00:44:42.537002 kubelet[3421]: I0430 00:44:42.533972 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-hostproc\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.537002 kubelet[3421]: I0430 00:44:42.536011 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-etc-cni-netd\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.541060 kubelet[3421]: I0430 00:44:42.536129 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-config-path\") pod \"cilium-vf7ch\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") " pod="kube-system/cilium-vf7ch"
Apr 30 00:44:42.548245 systemd[1]: Created slice kubepods-burstable-podc2509c74_c623_4e82_be5f_7a9691baa46e.slice - libcontainer container kubepods-burstable-podc2509c74_c623_4e82_be5f_7a9691baa46e.slice.
Apr 30 00:44:43.074467 systemd[1]: Created slice kubepods-besteffort-pod5b1ff640_be57_4642_a095_ff871e916abc.slice - libcontainer container kubepods-besteffort-pod5b1ff640_be57_4642_a095_ff871e916abc.slice.
Apr 30 00:44:43.142250 kubelet[3421]: I0430 00:44:43.141916 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b1ff640-be57-4642-a095-ff871e916abc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-md8vg\" (UID: \"5b1ff640-be57-4642-a095-ff871e916abc\") " pod="kube-system/cilium-operator-6c4d7847fc-md8vg"
Apr 30 00:44:43.142250 kubelet[3421]: I0430 00:44:43.142008 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxhr5\" (UniqueName: \"kubernetes.io/projected/5b1ff640-be57-4642-a095-ff871e916abc-kube-api-access-fxhr5\") pod \"cilium-operator-6c4d7847fc-md8vg\" (UID: \"5b1ff640-be57-4642-a095-ff871e916abc\") " pod="kube-system/cilium-operator-6c4d7847fc-md8vg"
Apr 30 00:44:43.639847 kubelet[3421]: E0430 00:44:43.639774 3421 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:43.640022 kubelet[3421]: E0430 00:44:43.639913 3421 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cc29d8a9-06f8-4da0-be6a-87187446406a-kube-proxy podName:cc29d8a9-06f8-4da0-be6a-87187446406a nodeName:}" failed. No retries permitted until 2025-04-30 00:44:44.139877713 +0000 UTC m=+4.869631591 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/cc29d8a9-06f8-4da0-be6a-87187446406a-kube-proxy") pod "kube-proxy-zpbtw" (UID: "cc29d8a9-06f8-4da0-be6a-87187446406a") : failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:43.708330 kubelet[3421]: E0430 00:44:43.707820 3421 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:43.708330 kubelet[3421]: E0430 00:44:43.707879 3421 projected.go:194] Error preparing data for projected volume kube-api-access-kwvj8 for pod kube-system/kube-proxy-zpbtw: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:43.708330 kubelet[3421]: E0430 00:44:43.707974 3421 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc29d8a9-06f8-4da0-be6a-87187446406a-kube-api-access-kwvj8 podName:cc29d8a9-06f8-4da0-be6a-87187446406a nodeName:}" failed. No retries permitted until 2025-04-30 00:44:44.207943693 +0000 UTC m=+4.937697583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kwvj8" (UniqueName: "kubernetes.io/projected/cc29d8a9-06f8-4da0-be6a-87187446406a-kube-api-access-kwvj8") pod "kube-proxy-zpbtw" (UID: "cc29d8a9-06f8-4da0-be6a-87187446406a") : failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:43.740773 kubelet[3421]: E0430 00:44:43.740721 3421 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:43.741415 kubelet[3421]: E0430 00:44:43.740936 3421 projected.go:194] Error preparing data for projected volume kube-api-access-9t2lt for pod kube-system/cilium-vf7ch: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:43.741415 kubelet[3421]: E0430 00:44:43.741055 3421 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2509c74-c623-4e82-be5f-7a9691baa46e-kube-api-access-9t2lt podName:c2509c74-c623-4e82-be5f-7a9691baa46e nodeName:}" failed. No retries permitted until 2025-04-30 00:44:44.24102773 +0000 UTC m=+4.970781632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9t2lt" (UniqueName: "kubernetes.io/projected/c2509c74-c623-4e82-be5f-7a9691baa46e-kube-api-access-9t2lt") pod "cilium-vf7ch" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e") : failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:44.285806 containerd[2002]: time="2025-04-30T00:44:44.285738116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-md8vg,Uid:5b1ff640-be57-4642-a095-ff871e916abc,Namespace:kube-system,Attempt:0,}"
Apr 30 00:44:44.291018 containerd[2002]: time="2025-04-30T00:44:44.290944736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zpbtw,Uid:cc29d8a9-06f8-4da0-be6a-87187446406a,Namespace:kube-system,Attempt:0,}"
Apr 30 00:44:44.357277 containerd[2002]: time="2025-04-30T00:44:44.356938389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vf7ch,Uid:c2509c74-c623-4e82-be5f-7a9691baa46e,Namespace:kube-system,Attempt:0,}"
Apr 30 00:44:44.387819 containerd[2002]: time="2025-04-30T00:44:44.383248437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:44:44.387819 containerd[2002]: time="2025-04-30T00:44:44.383353713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:44:44.387819 containerd[2002]: time="2025-04-30T00:44:44.383417361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:44.387819 containerd[2002]: time="2025-04-30T00:44:44.383604861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:44.401773 containerd[2002]: time="2025-04-30T00:44:44.400958913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:44:44.401773 containerd[2002]: time="2025-04-30T00:44:44.401624241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:44:44.401773 containerd[2002]: time="2025-04-30T00:44:44.401671809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:44.402323 containerd[2002]: time="2025-04-30T00:44:44.401883033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:44.450446 systemd[1]: Started cri-containerd-f95890866308389b88a0501c9ae8e1bcfb789b17dc265ad01fc5cfeeab6202bd.scope - libcontainer container f95890866308389b88a0501c9ae8e1bcfb789b17dc265ad01fc5cfeeab6202bd.
Apr 30 00:44:44.457798 containerd[2002]: time="2025-04-30T00:44:44.456434493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:44:44.457798 containerd[2002]: time="2025-04-30T00:44:44.456722301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:44:44.457798 containerd[2002]: time="2025-04-30T00:44:44.456754557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:44.457798 containerd[2002]: time="2025-04-30T00:44:44.457579125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:44.485420 systemd[1]: Started cri-containerd-b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b.scope - libcontainer container b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b.
Apr 30 00:44:44.523522 systemd[1]: Started cri-containerd-d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5.scope - libcontainer container d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5.
Apr 30 00:44:44.566753 containerd[2002]: time="2025-04-30T00:44:44.565000726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zpbtw,Uid:cc29d8a9-06f8-4da0-be6a-87187446406a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f95890866308389b88a0501c9ae8e1bcfb789b17dc265ad01fc5cfeeab6202bd\""
Apr 30 00:44:44.582586 containerd[2002]: time="2025-04-30T00:44:44.582486934Z" level=info msg="CreateContainer within sandbox \"f95890866308389b88a0501c9ae8e1bcfb789b17dc265ad01fc5cfeeab6202bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 00:44:44.660033 containerd[2002]: time="2025-04-30T00:44:44.659821762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vf7ch,Uid:c2509c74-c623-4e82-be5f-7a9691baa46e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\""
Apr 30 00:44:44.665687 containerd[2002]: time="2025-04-30T00:44:44.665625322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-md8vg,Uid:5b1ff640-be57-4642-a095-ff871e916abc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\""
Apr 30 00:44:44.669273 containerd[2002]: time="2025-04-30T00:44:44.669221338Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 30 00:44:44.673566 containerd[2002]: time="2025-04-30T00:44:44.673272010Z" level=info msg="CreateContainer within sandbox \"f95890866308389b88a0501c9ae8e1bcfb789b17dc265ad01fc5cfeeab6202bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68e2e987ab0807bdab8d558879ef2a3853c98b96c6512e1490797c35f4026288\""
Apr 30 00:44:44.674871 containerd[2002]: time="2025-04-30T00:44:44.674386198Z" level=info msg="StartContainer for \"68e2e987ab0807bdab8d558879ef2a3853c98b96c6512e1490797c35f4026288\""
Apr 30 00:44:44.733748 systemd[1]: Started cri-containerd-68e2e987ab0807bdab8d558879ef2a3853c98b96c6512e1490797c35f4026288.scope - libcontainer container 68e2e987ab0807bdab8d558879ef2a3853c98b96c6512e1490797c35f4026288.
Apr 30 00:44:44.798565 containerd[2002]: time="2025-04-30T00:44:44.798449399Z" level=info msg="StartContainer for \"68e2e987ab0807bdab8d558879ef2a3853c98b96c6512e1490797c35f4026288\" returns successfully"
Apr 30 00:44:48.279344 kubelet[3421]: I0430 00:44:48.278705 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zpbtw" podStartSLOduration=6.278678304 podStartE2EDuration="6.278678304s" podCreationTimestamp="2025-04-30 00:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:45.741293076 +0000 UTC m=+6.471046978" watchObservedRunningTime="2025-04-30 00:44:48.278678304 +0000 UTC m=+9.008432194"
Apr 30 00:44:49.699135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349753971.mount: Deactivated successfully.
Apr 30 00:44:52.365058 containerd[2002]: time="2025-04-30T00:44:52.364152340Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:44:52.366605 containerd[2002]: time="2025-04-30T00:44:52.366119032Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Apr 30 00:44:52.368957 containerd[2002]: time="2025-04-30T00:44:52.368825596Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:44:52.373886 containerd[2002]: time="2025-04-30T00:44:52.373656257Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.704191283s"
Apr 30 00:44:52.373886 containerd[2002]: time="2025-04-30T00:44:52.373732361Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Apr 30 00:44:52.378688 containerd[2002]: time="2025-04-30T00:44:52.376597085Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 30 00:44:52.386860 containerd[2002]: time="2025-04-30T00:44:52.386779301Z" level=info msg="CreateContainer within sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 00:44:52.425020 containerd[2002]: time="2025-04-30T00:44:52.424945229Z" level=info msg="CreateContainer within sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1\""
Apr 30 00:44:52.427006 containerd[2002]: time="2025-04-30T00:44:52.426553457Z" level=info msg="StartContainer for \"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1\""
Apr 30 00:44:52.494434 systemd[1]: Started cri-containerd-77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1.scope - libcontainer container 77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1.
Apr 30 00:44:52.560422 containerd[2002]: time="2025-04-30T00:44:52.560205497Z" level=info msg="StartContainer for \"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1\" returns successfully"
Apr 30 00:44:52.584955 systemd[1]: cri-containerd-77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1.scope: Deactivated successfully.
Apr 30 00:44:53.411511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1-rootfs.mount: Deactivated successfully.
Apr 30 00:44:53.913273 containerd[2002]: time="2025-04-30T00:44:53.913150820Z" level=info msg="shim disconnected" id=77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1 namespace=k8s.io
Apr 30 00:44:53.913273 containerd[2002]: time="2025-04-30T00:44:53.913264160Z" level=warning msg="cleaning up after shim disconnected" id=77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1 namespace=k8s.io
Apr 30 00:44:53.914320 containerd[2002]: time="2025-04-30T00:44:53.913288724Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:44:54.800111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031462165.mount: Deactivated successfully.
Apr 30 00:44:54.822214 containerd[2002]: time="2025-04-30T00:44:54.821847393Z" level=info msg="CreateContainer within sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 00:44:54.918205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3391053493.mount: Deactivated successfully.
Apr 30 00:44:54.924980 containerd[2002]: time="2025-04-30T00:44:54.924688881Z" level=info msg="CreateContainer within sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e\""
Apr 30 00:44:54.927111 containerd[2002]: time="2025-04-30T00:44:54.926226489Z" level=info msg="StartContainer for \"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e\""
Apr 30 00:44:54.999554 systemd[1]: Started cri-containerd-0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e.scope - libcontainer container 0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e.
Apr 30 00:44:55.065581 containerd[2002]: time="2025-04-30T00:44:55.065412894Z" level=info msg="StartContainer for \"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e\" returns successfully" Apr 30 00:44:55.094651 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:44:55.095288 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:44:55.095403 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:44:55.105828 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:44:55.111669 systemd[1]: cri-containerd-0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e.scope: Deactivated successfully. Apr 30 00:44:55.166480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:44:55.222100 containerd[2002]: time="2025-04-30T00:44:55.221745151Z" level=info msg="shim disconnected" id=0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e namespace=k8s.io Apr 30 00:44:55.222100 containerd[2002]: time="2025-04-30T00:44:55.221826703Z" level=warning msg="cleaning up after shim disconnected" id=0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e namespace=k8s.io Apr 30 00:44:55.222100 containerd[2002]: time="2025-04-30T00:44:55.221846731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:44:55.264720 containerd[2002]: time="2025-04-30T00:44:55.263948311Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:44:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 30 00:44:55.749826 containerd[2002]: time="2025-04-30T00:44:55.749724969Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:55.754397 
containerd[2002]: time="2025-04-30T00:44:55.754310613Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 30 00:44:55.758134 containerd[2002]: time="2025-04-30T00:44:55.756529773Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:55.758052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e-rootfs.mount: Deactivated successfully. Apr 30 00:44:55.762787 containerd[2002]: time="2025-04-30T00:44:55.762712281Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.385986268s" Apr 30 00:44:55.763054 containerd[2002]: time="2025-04-30T00:44:55.762790785Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 30 00:44:55.767945 containerd[2002]: time="2025-04-30T00:44:55.767874057Z" level=info msg="CreateContainer within sandbox \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 00:44:55.810290 containerd[2002]: time="2025-04-30T00:44:55.810052678Z" level=info msg="CreateContainer within sandbox \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\"" Apr 30 00:44:55.813973 containerd[2002]: time="2025-04-30T00:44:55.812612782Z" level=info msg="StartContainer for \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\"" Apr 30 00:44:55.837987 containerd[2002]: time="2025-04-30T00:44:55.837925150Z" level=info msg="CreateContainer within sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:44:55.913478 containerd[2002]: time="2025-04-30T00:44:55.913368898Z" level=info msg="CreateContainer within sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb\"" Apr 30 00:44:55.914908 containerd[2002]: time="2025-04-30T00:44:55.914703766Z" level=info msg="StartContainer for \"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb\"" Apr 30 00:44:55.916611 systemd[1]: Started cri-containerd-5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074.scope - libcontainer container 5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074. Apr 30 00:44:55.995447 containerd[2002]: time="2025-04-30T00:44:55.994210966Z" level=info msg="StartContainer for \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\" returns successfully" Apr 30 00:44:55.996831 systemd[1]: Started cri-containerd-aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb.scope - libcontainer container aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb. 
Apr 30 00:44:56.071670 containerd[2002]: time="2025-04-30T00:44:56.070539775Z" level=info msg="StartContainer for \"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb\" returns successfully" Apr 30 00:44:56.078238 systemd[1]: cri-containerd-aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb.scope: Deactivated successfully. Apr 30 00:44:56.225274 containerd[2002]: time="2025-04-30T00:44:56.224851388Z" level=info msg="shim disconnected" id=aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb namespace=k8s.io Apr 30 00:44:56.225274 containerd[2002]: time="2025-04-30T00:44:56.224943368Z" level=warning msg="cleaning up after shim disconnected" id=aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb namespace=k8s.io Apr 30 00:44:56.225274 containerd[2002]: time="2025-04-30T00:44:56.224966168Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:44:56.760954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756766868.mount: Deactivated successfully. 
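The entries above repeat a fixed init-container lifecycle for each Cilium init step (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs): CreateContainer returns a container id, StartContainer succeeds, the systemd scope deactivates, the shim disconnects, and the rootfs mount is cleaned up. A minimal sketch of tracing one container id through a saved copy of such a dump — the file name `node.log`, the shortened id, and the sample lines are illustrative stand-ins, not from the original system:

```shell
# Hedged sketch: follow a single container id through a journald dump.
# node.log and the abbreviated entries below are stand-ins for the real log.
cat > node.log <<'EOF'
Apr 30 00:44:52 containerd[2002]: StartContainer for "77ecf5a6"
Apr 30 00:44:52 systemd[1]: cri-containerd-77ecf5a6.scope: Deactivated successfully.
Apr 30 00:44:53 containerd[2002]: shim disconnected id=77ecf5a6
EOF
cid="77ecf5a6"
# Print every event mentioning the container, in log order, with line numbers.
grep -n "$cid" node.log
```

The same `grep` over the full dump recovers each init container's create/start/deactivate/disconnect sequence in order.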
Apr 30 00:44:56.841532 containerd[2002]: time="2025-04-30T00:44:56.841411811Z" level=info msg="CreateContainer within sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:44:56.881677 containerd[2002]: time="2025-04-30T00:44:56.881420411Z" level=info msg="CreateContainer within sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a\"" Apr 30 00:44:56.886014 containerd[2002]: time="2025-04-30T00:44:56.884466731Z" level=info msg="StartContainer for \"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a\"" Apr 30 00:44:56.999420 systemd[1]: Started cri-containerd-c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a.scope - libcontainer container c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a. Apr 30 00:44:57.164747 containerd[2002]: time="2025-04-30T00:44:57.164612600Z" level=info msg="StartContainer for \"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a\" returns successfully" Apr 30 00:44:57.169970 systemd[1]: cri-containerd-c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a.scope: Deactivated successfully. 
Apr 30 00:44:57.234052 containerd[2002]: time="2025-04-30T00:44:57.233965029Z" level=info msg="shim disconnected" id=c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a namespace=k8s.io Apr 30 00:44:57.234052 containerd[2002]: time="2025-04-30T00:44:57.234043245Z" level=warning msg="cleaning up after shim disconnected" id=c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a namespace=k8s.io Apr 30 00:44:57.234764 containerd[2002]: time="2025-04-30T00:44:57.234084633Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:44:57.759987 systemd[1]: run-containerd-runc-k8s.io-c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a-runc.8Xv74G.mount: Deactivated successfully. Apr 30 00:44:57.761242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a-rootfs.mount: Deactivated successfully. Apr 30 00:44:57.858892 containerd[2002]: time="2025-04-30T00:44:57.858665652Z" level=info msg="CreateContainer within sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:44:57.891449 containerd[2002]: time="2025-04-30T00:44:57.891215520Z" level=info msg="CreateContainer within sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\"" Apr 30 00:44:57.894848 containerd[2002]: time="2025-04-30T00:44:57.894538092Z" level=info msg="StartContainer for \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\"" Apr 30 00:44:57.974099 kubelet[3421]: I0430 00:44:57.973376 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-md8vg" podStartSLOduration=3.883296913 podStartE2EDuration="14.973350888s" podCreationTimestamp="2025-04-30 00:44:43 +0000 
UTC" firstStartedPulling="2025-04-30 00:44:44.673799434 +0000 UTC m=+5.403553312" lastFinishedPulling="2025-04-30 00:44:55.763853397 +0000 UTC m=+16.493607287" observedRunningTime="2025-04-30 00:44:57.106515572 +0000 UTC m=+17.836269486" watchObservedRunningTime="2025-04-30 00:44:57.973350888 +0000 UTC m=+18.703104814" Apr 30 00:44:57.983441 systemd[1]: Started cri-containerd-457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f.scope - libcontainer container 457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f. Apr 30 00:44:58.081327 containerd[2002]: time="2025-04-30T00:44:58.079712313Z" level=info msg="StartContainer for \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\" returns successfully" Apr 30 00:44:58.273603 kubelet[3421]: I0430 00:44:58.273474 3421 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Apr 30 00:44:58.351321 systemd[1]: Created slice kubepods-burstable-poddeb4e6ed_21a4_47d3_b53c_4e1f3f3eec36.slice - libcontainer container kubepods-burstable-poddeb4e6ed_21a4_47d3_b53c_4e1f3f3eec36.slice. 
Apr 30 00:44:58.366357 kubelet[3421]: I0430 00:44:58.366286 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deb4e6ed-21a4-47d3-b53c-4e1f3f3eec36-config-volume\") pod \"coredns-668d6bf9bc-z4fdh\" (UID: \"deb4e6ed-21a4-47d3-b53c-4e1f3f3eec36\") " pod="kube-system/coredns-668d6bf9bc-z4fdh" Apr 30 00:44:58.366357 kubelet[3421]: I0430 00:44:58.366360 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5432b590-cc37-4574-859c-5f91257e4281-config-volume\") pod \"coredns-668d6bf9bc-vfssk\" (UID: \"5432b590-cc37-4574-859c-5f91257e4281\") " pod="kube-system/coredns-668d6bf9bc-vfssk" Apr 30 00:44:58.366599 kubelet[3421]: I0430 00:44:58.366419 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcl7f\" (UniqueName: \"kubernetes.io/projected/deb4e6ed-21a4-47d3-b53c-4e1f3f3eec36-kube-api-access-tcl7f\") pod \"coredns-668d6bf9bc-z4fdh\" (UID: \"deb4e6ed-21a4-47d3-b53c-4e1f3f3eec36\") " pod="kube-system/coredns-668d6bf9bc-z4fdh" Apr 30 00:44:58.366599 kubelet[3421]: I0430 00:44:58.366484 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k44fp\" (UniqueName: \"kubernetes.io/projected/5432b590-cc37-4574-859c-5f91257e4281-kube-api-access-k44fp\") pod \"coredns-668d6bf9bc-vfssk\" (UID: \"5432b590-cc37-4574-859c-5f91257e4281\") " pod="kube-system/coredns-668d6bf9bc-vfssk" Apr 30 00:44:58.375643 systemd[1]: Created slice kubepods-burstable-pod5432b590_cc37_4574_859c_5f91257e4281.slice - libcontainer container kubepods-burstable-pod5432b590_cc37_4574_859c_5f91257e4281.slice. 
Apr 30 00:44:58.663805 containerd[2002]: time="2025-04-30T00:44:58.663536736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z4fdh,Uid:deb4e6ed-21a4-47d3-b53c-4e1f3f3eec36,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:58.685518 containerd[2002]: time="2025-04-30T00:44:58.685182012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vfssk,Uid:5432b590-cc37-4574-859c-5f91257e4281,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:58.919959 kubelet[3421]: I0430 00:44:58.919215 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vf7ch" podStartSLOduration=9.209922818 podStartE2EDuration="16.919177429s" podCreationTimestamp="2025-04-30 00:44:42 +0000 UTC" firstStartedPulling="2025-04-30 00:44:44.667025926 +0000 UTC m=+5.396779816" lastFinishedPulling="2025-04-30 00:44:52.376280537 +0000 UTC m=+13.106034427" observedRunningTime="2025-04-30 00:44:58.916279381 +0000 UTC m=+19.646033295" watchObservedRunningTime="2025-04-30 00:44:58.919177429 +0000 UTC m=+19.648931319" Apr 30 00:45:01.132389 (udev-worker)[4193]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:45:01.135678 systemd-networkd[1930]: cilium_host: Link UP Apr 30 00:45:01.137705 systemd-networkd[1930]: cilium_net: Link UP Apr 30 00:45:01.148011 (udev-worker)[4195]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 00:45:01.148131 systemd-networkd[1930]: cilium_net: Gained carrier Apr 30 00:45:01.148685 systemd-networkd[1930]: cilium_host: Gained carrier Apr 30 00:45:01.149032 systemd-networkd[1930]: cilium_net: Gained IPv6LL Apr 30 00:45:01.157537 systemd-networkd[1930]: cilium_host: Gained IPv6LL Apr 30 00:45:01.348772 systemd-networkd[1930]: cilium_vxlan: Link UP Apr 30 00:45:01.348797 systemd-networkd[1930]: cilium_vxlan: Gained carrier Apr 30 00:45:01.875277 kernel: NET: Registered PF_ALG protocol family Apr 30 00:45:02.572446 systemd-networkd[1930]: cilium_vxlan: Gained IPv6LL Apr 30 00:45:03.586738 systemd-networkd[1930]: lxc_health: Link UP Apr 30 00:45:03.590212 systemd-networkd[1930]: lxc_health: Gained carrier Apr 30 00:45:04.251122 kernel: eth0: renamed from tmp5ab81 Apr 30 00:45:04.256340 systemd-networkd[1930]: lxc7236880dd9d8: Link UP Apr 30 00:45:04.262767 systemd-networkd[1930]: lxc7236880dd9d8: Gained carrier Apr 30 00:45:04.297745 systemd-networkd[1930]: lxcc699d2310c9d: Link UP Apr 30 00:45:04.308127 kernel: eth0: renamed from tmp2fd64 Apr 30 00:45:04.316215 systemd-networkd[1930]: lxcc699d2310c9d: Gained carrier Apr 30 00:45:04.683258 systemd-networkd[1930]: lxc_health: Gained IPv6LL Apr 30 00:45:05.643329 systemd-networkd[1930]: lxc7236880dd9d8: Gained IPv6LL Apr 30 00:45:05.707358 systemd-networkd[1930]: lxcc699d2310c9d: Gained IPv6LL Apr 30 00:45:07.073504 systemd[1]: run-containerd-runc-k8s.io-457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f-runc.SaSpZy.mount: Deactivated successfully. 
Apr 30 00:45:07.196896 kubelet[3421]: E0430 00:45:07.196832 3421 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59878->127.0.0.1:40219: write tcp 127.0.0.1:59878->127.0.0.1:40219: write: broken pipe Apr 30 00:45:08.191002 ntpd[1984]: Listen normally on 8 cilium_host 192.168.0.181:123 Apr 30 00:45:08.192848 ntpd[1984]: Listen normally on 9 cilium_net [fe80::e4c0:27ff:fecc:5bfd%4]:123 Apr 30 00:45:08.192938 ntpd[1984]: Listen normally on 10 cilium_host [fe80::e820:e6ff:fefb:d9fb%5]:123 Apr 30 00:45:08.193013 ntpd[1984]: Listen normally on 11 cilium_vxlan [fe80::40bc:12ff:fed4:c8df%6]:123 Apr 30 00:45:08.193126 ntpd[1984]: Listen normally on 12 lxc_health [fe80::b805:9fff:fe63:ee19%8]:123 Apr 30 00:45:08.193202 ntpd[1984]: Listen normally on 13 lxc7236880dd9d8 [fe80::4a3:1cff:fe78:7ad5%10]:123 Apr 30 00:45:08.193272 ntpd[1984]: Listen normally on 14 lxcc699d2310c9d [fe80::84dd:8aff:fed6:1546%12]:123 Apr 30 00:45:09.369205 systemd[1]: 
run-containerd-runc-k8s.io-457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f-runc.wpihMf.mount: Deactivated successfully. Apr 30 00:45:09.746284 systemd[1]: run-containerd-runc-k8s.io-457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f-runc.aifQYm.mount: Deactivated successfully. Apr 30 00:45:10.853877 sudo[2359]: pam_unix(sudo:session): session closed for user root Apr 30 00:45:10.894417 sshd[2356]: pam_unix(sshd:session): session closed for user core Apr 30 00:45:10.902538 systemd-logind[1991]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:45:10.904518 systemd[1]: sshd@8-172.31.18.219:22-147.75.109.163:51564.service: Deactivated successfully. Apr 30 00:45:10.913833 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:45:10.914806 systemd[1]: session-9.scope: Consumed 10.556s CPU time, 152.1M memory peak, 0B memory swap peak. Apr 30 00:45:10.917134 systemd-logind[1991]: Removed session 9. Apr 30 00:45:14.381878 containerd[2002]: time="2025-04-30T00:45:14.379774058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:14.381878 containerd[2002]: time="2025-04-30T00:45:14.379920266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:14.381878 containerd[2002]: time="2025-04-30T00:45:14.379980026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:14.381878 containerd[2002]: time="2025-04-30T00:45:14.380299022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:14.448125 containerd[2002]: time="2025-04-30T00:45:14.447684002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:45:14.448125 containerd[2002]: time="2025-04-30T00:45:14.447926330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:45:14.448125 containerd[2002]: time="2025-04-30T00:45:14.448054670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:14.452139 containerd[2002]: time="2025-04-30T00:45:14.448632038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:45:14.509634 systemd[1]: Started cri-containerd-5ab81b00bf2ec163e28d8d5956fef14b181864dfeaeff739458f359bdf23979b.scope - libcontainer container 5ab81b00bf2ec163e28d8d5956fef14b181864dfeaeff739458f359bdf23979b. Apr 30 00:45:14.532449 systemd[1]: Started cri-containerd-2fd644e867586ed3a15ac74340dd9d234908e7ebbd585491cbaec3ff05b3bb61.scope - libcontainer container 2fd644e867586ed3a15ac74340dd9d234908e7ebbd585491cbaec3ff05b3bb61. 
Apr 30 00:45:14.667591 containerd[2002]: time="2025-04-30T00:45:14.665395575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vfssk,Uid:5432b590-cc37-4574-859c-5f91257e4281,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fd644e867586ed3a15ac74340dd9d234908e7ebbd585491cbaec3ff05b3bb61\"" Apr 30 00:45:14.679004 containerd[2002]: time="2025-04-30T00:45:14.678577947Z" level=info msg="CreateContainer within sandbox \"2fd644e867586ed3a15ac74340dd9d234908e7ebbd585491cbaec3ff05b3bb61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:45:14.682316 containerd[2002]: time="2025-04-30T00:45:14.682245483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z4fdh,Uid:deb4e6ed-21a4-47d3-b53c-4e1f3f3eec36,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ab81b00bf2ec163e28d8d5956fef14b181864dfeaeff739458f359bdf23979b\"" Apr 30 00:45:14.697230 containerd[2002]: time="2025-04-30T00:45:14.696945327Z" level=info msg="CreateContainer within sandbox \"5ab81b00bf2ec163e28d8d5956fef14b181864dfeaeff739458f359bdf23979b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:45:14.745115 containerd[2002]: time="2025-04-30T00:45:14.744589588Z" level=info msg="CreateContainer within sandbox \"2fd644e867586ed3a15ac74340dd9d234908e7ebbd585491cbaec3ff05b3bb61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6a26eef2c30de54f37ce15725052bc5e8b2c2f47dae496cbe8f5116e050c5f6\"" Apr 30 00:45:14.746877 containerd[2002]: time="2025-04-30T00:45:14.746771728Z" level=info msg="CreateContainer within sandbox \"5ab81b00bf2ec163e28d8d5956fef14b181864dfeaeff739458f359bdf23979b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5fb3f9f26b25cf1b41cdc98dada422859c8835cb2b6a255c3d7ebc4246288289\"" Apr 30 00:45:14.748023 containerd[2002]: time="2025-04-30T00:45:14.747967912Z" level=info msg="StartContainer for 
\"e6a26eef2c30de54f37ce15725052bc5e8b2c2f47dae496cbe8f5116e050c5f6\"" Apr 30 00:45:14.750143 containerd[2002]: time="2025-04-30T00:45:14.748438336Z" level=info msg="StartContainer for \"5fb3f9f26b25cf1b41cdc98dada422859c8835cb2b6a255c3d7ebc4246288289\"" Apr 30 00:45:14.849128 systemd[1]: Started cri-containerd-5fb3f9f26b25cf1b41cdc98dada422859c8835cb2b6a255c3d7ebc4246288289.scope - libcontainer container 5fb3f9f26b25cf1b41cdc98dada422859c8835cb2b6a255c3d7ebc4246288289. Apr 30 00:45:14.872680 systemd[1]: Started cri-containerd-e6a26eef2c30de54f37ce15725052bc5e8b2c2f47dae496cbe8f5116e050c5f6.scope - libcontainer container e6a26eef2c30de54f37ce15725052bc5e8b2c2f47dae496cbe8f5116e050c5f6. Apr 30 00:45:14.980831 containerd[2002]: time="2025-04-30T00:45:14.979749005Z" level=info msg="StartContainer for \"5fb3f9f26b25cf1b41cdc98dada422859c8835cb2b6a255c3d7ebc4246288289\" returns successfully" Apr 30 00:45:14.995816 containerd[2002]: time="2025-04-30T00:45:14.995602289Z" level=info msg="StartContainer for \"e6a26eef2c30de54f37ce15725052bc5e8b2c2f47dae496cbe8f5116e050c5f6\" returns successfully" Apr 30 00:45:16.032862 kubelet[3421]: I0430 00:45:16.032664 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z4fdh" podStartSLOduration=33.032636702 podStartE2EDuration="33.032636702s" podCreationTimestamp="2025-04-30 00:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:16.004930082 +0000 UTC m=+36.734683996" watchObservedRunningTime="2025-04-30 00:45:16.032636702 +0000 UTC m=+36.762390580" Apr 30 00:45:55.132627 systemd[1]: Started sshd@9-172.31.18.219:22-147.75.109.163:55320.service - OpenSSH per-connection server daemon (147.75.109.163:55320). 
Apr 30 00:45:55.406360 sshd[4939]: Accepted publickey for core from 147.75.109.163 port 55320 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:45:55.409229 sshd[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:45:55.418191 systemd-logind[1991]: New session 10 of user core. Apr 30 00:45:55.426406 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:45:55.738733 sshd[4939]: pam_unix(sshd:session): session closed for user core Apr 30 00:45:55.747166 systemd-logind[1991]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:45:55.749728 systemd[1]: sshd@9-172.31.18.219:22-147.75.109.163:55320.service: Deactivated successfully. Apr 30 00:45:55.754303 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:45:55.758789 systemd-logind[1991]: Removed session 10. Apr 30 00:46:00.795635 systemd[1]: Started sshd@10-172.31.18.219:22-147.75.109.163:58612.service - OpenSSH per-connection server daemon (147.75.109.163:58612). Apr 30 00:46:01.058209 sshd[4957]: Accepted publickey for core from 147.75.109.163 port 58612 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:46:01.062191 sshd[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:46:01.071936 systemd-logind[1991]: New session 11 of user core. Apr 30 00:46:01.081404 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:46:01.374638 sshd[4957]: pam_unix(sshd:session): session closed for user core Apr 30 00:46:01.381913 systemd[1]: sshd@10-172.31.18.219:22-147.75.109.163:58612.service: Deactivated successfully. Apr 30 00:46:01.386683 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:46:01.389210 systemd-logind[1991]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:46:01.391226 systemd-logind[1991]: Removed session 11. 
Apr 30 00:46:06.435646 systemd[1]: Started sshd@11-172.31.18.219:22-147.75.109.163:58616.service - OpenSSH per-connection server daemon (147.75.109.163:58616).
Apr 30 00:46:06.705783 sshd[4973]: Accepted publickey for core from 147.75.109.163 port 58616 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:06.708296 sshd[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:06.715719 systemd-logind[1991]: New session 12 of user core.
Apr 30 00:46:06.728401 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 00:46:07.019361 sshd[4973]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:07.024748 systemd[1]: sshd@11-172.31.18.219:22-147.75.109.163:58616.service: Deactivated successfully.
Apr 30 00:46:07.029378 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 00:46:07.032672 systemd-logind[1991]: Session 12 logged out. Waiting for processes to exit.
Apr 30 00:46:07.035294 systemd-logind[1991]: Removed session 12.
Apr 30 00:46:12.081688 systemd[1]: Started sshd@12-172.31.18.219:22-147.75.109.163:60608.service - OpenSSH per-connection server daemon (147.75.109.163:60608).
Apr 30 00:46:12.362616 sshd[4987]: Accepted publickey for core from 147.75.109.163 port 60608 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:12.366125 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:12.376193 systemd-logind[1991]: New session 13 of user core.
Apr 30 00:46:12.385422 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 00:46:12.689784 sshd[4987]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:12.697353 systemd[1]: sshd@12-172.31.18.219:22-147.75.109.163:60608.service: Deactivated successfully.
Apr 30 00:46:12.703519 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 00:46:12.706698 systemd-logind[1991]: Session 13 logged out. Waiting for processes to exit.
Apr 30 00:46:12.709311 systemd-logind[1991]: Removed session 13.
Apr 30 00:46:12.747791 systemd[1]: Started sshd@13-172.31.18.219:22-147.75.109.163:60622.service - OpenSSH per-connection server daemon (147.75.109.163:60622).
Apr 30 00:46:13.015693 sshd[5001]: Accepted publickey for core from 147.75.109.163 port 60622 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:13.019602 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:13.029898 systemd-logind[1991]: New session 14 of user core.
Apr 30 00:46:13.039440 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 00:46:13.444204 sshd[5001]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:13.460483 systemd[1]: sshd@13-172.31.18.219:22-147.75.109.163:60622.service: Deactivated successfully.
Apr 30 00:46:13.471051 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 00:46:13.476340 systemd-logind[1991]: Session 14 logged out. Waiting for processes to exit.
Apr 30 00:46:13.504695 systemd[1]: Started sshd@14-172.31.18.219:22-147.75.109.163:60626.service - OpenSSH per-connection server daemon (147.75.109.163:60626).
Apr 30 00:46:13.507036 systemd-logind[1991]: Removed session 14.
Apr 30 00:46:13.779636 sshd[5012]: Accepted publickey for core from 147.75.109.163 port 60626 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:13.782688 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:13.792622 systemd-logind[1991]: New session 15 of user core.
Apr 30 00:46:13.801587 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 00:46:14.104482 sshd[5012]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:14.111441 systemd-logind[1991]: Session 15 logged out. Waiting for processes to exit.
Apr 30 00:46:14.111991 systemd[1]: sshd@14-172.31.18.219:22-147.75.109.163:60626.service: Deactivated successfully.
Apr 30 00:46:14.117198 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 00:46:14.122413 systemd-logind[1991]: Removed session 15.
Apr 30 00:46:19.157961 systemd[1]: Started sshd@15-172.31.18.219:22-147.75.109.163:59956.service - OpenSSH per-connection server daemon (147.75.109.163:59956).
Apr 30 00:46:19.421192 sshd[5027]: Accepted publickey for core from 147.75.109.163 port 59956 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:19.424022 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:19.432304 systemd-logind[1991]: New session 16 of user core.
Apr 30 00:46:19.442316 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 00:46:19.747055 sshd[5027]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:19.759730 systemd[1]: sshd@15-172.31.18.219:22-147.75.109.163:59956.service: Deactivated successfully.
Apr 30 00:46:19.763852 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 00:46:19.766347 systemd-logind[1991]: Session 16 logged out. Waiting for processes to exit.
Apr 30 00:46:19.768946 systemd-logind[1991]: Removed session 16.
Apr 30 00:46:24.804901 systemd[1]: Started sshd@16-172.31.18.219:22-147.75.109.163:59962.service - OpenSSH per-connection server daemon (147.75.109.163:59962).
Apr 30 00:46:25.068867 sshd[5040]: Accepted publickey for core from 147.75.109.163 port 59962 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:25.072221 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:25.080654 systemd-logind[1991]: New session 17 of user core.
Apr 30 00:46:25.088400 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 00:46:25.386319 sshd[5040]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:25.393523 systemd[1]: sshd@16-172.31.18.219:22-147.75.109.163:59962.service: Deactivated successfully.
Apr 30 00:46:25.400114 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 00:46:25.403156 systemd-logind[1991]: Session 17 logged out. Waiting for processes to exit.
Apr 30 00:46:25.405191 systemd-logind[1991]: Removed session 17.
Apr 30 00:46:30.440581 systemd[1]: Started sshd@17-172.31.18.219:22-147.75.109.163:41502.service - OpenSSH per-connection server daemon (147.75.109.163:41502).
Apr 30 00:46:30.708528 sshd[5053]: Accepted publickey for core from 147.75.109.163 port 41502 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:30.711420 sshd[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:30.719179 systemd-logind[1991]: New session 18 of user core.
Apr 30 00:46:30.729434 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 00:46:31.022441 sshd[5053]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:31.028431 systemd-logind[1991]: Session 18 logged out. Waiting for processes to exit.
Apr 30 00:46:31.031124 systemd[1]: sshd@17-172.31.18.219:22-147.75.109.163:41502.service: Deactivated successfully.
Apr 30 00:46:31.035152 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 00:46:31.037601 systemd-logind[1991]: Removed session 18.
Apr 30 00:46:31.074626 systemd[1]: Started sshd@18-172.31.18.219:22-147.75.109.163:41506.service - OpenSSH per-connection server daemon (147.75.109.163:41506).
Apr 30 00:46:31.340999 sshd[5065]: Accepted publickey for core from 147.75.109.163 port 41506 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:31.343649 sshd[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:31.352248 systemd-logind[1991]: New session 19 of user core.
Apr 30 00:46:31.360390 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 00:46:31.777519 sshd[5065]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:31.782674 systemd[1]: sshd@18-172.31.18.219:22-147.75.109.163:41506.service: Deactivated successfully.
Apr 30 00:46:31.786628 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 00:46:31.790310 systemd-logind[1991]: Session 19 logged out. Waiting for processes to exit.
Apr 30 00:46:31.795576 systemd-logind[1991]: Removed session 19.
Apr 30 00:46:31.834607 systemd[1]: Started sshd@19-172.31.18.219:22-147.75.109.163:41510.service - OpenSSH per-connection server daemon (147.75.109.163:41510).
Apr 30 00:46:32.103865 sshd[5076]: Accepted publickey for core from 147.75.109.163 port 41510 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:32.106743 sshd[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:32.115448 systemd-logind[1991]: New session 20 of user core.
Apr 30 00:46:32.125398 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 00:46:33.405833 sshd[5076]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:33.416841 systemd[1]: sshd@19-172.31.18.219:22-147.75.109.163:41510.service: Deactivated successfully.
Apr 30 00:46:33.424848 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 00:46:33.428032 systemd-logind[1991]: Session 20 logged out. Waiting for processes to exit.
Apr 30 00:46:33.433356 systemd-logind[1991]: Removed session 20.
Apr 30 00:46:33.464618 systemd[1]: Started sshd@20-172.31.18.219:22-147.75.109.163:41524.service - OpenSSH per-connection server daemon (147.75.109.163:41524).
Apr 30 00:46:33.723580 sshd[5095]: Accepted publickey for core from 147.75.109.163 port 41524 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:33.725745 sshd[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:33.733431 systemd-logind[1991]: New session 21 of user core.
Apr 30 00:46:33.742622 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 00:46:34.263034 sshd[5095]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:34.272454 systemd[1]: sshd@20-172.31.18.219:22-147.75.109.163:41524.service: Deactivated successfully.
Apr 30 00:46:34.277218 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 00:46:34.279385 systemd-logind[1991]: Session 21 logged out. Waiting for processes to exit.
Apr 30 00:46:34.281406 systemd-logind[1991]: Removed session 21.
Apr 30 00:46:34.322764 systemd[1]: Started sshd@21-172.31.18.219:22-147.75.109.163:41526.service - OpenSSH per-connection server daemon (147.75.109.163:41526).
Apr 30 00:46:34.584461 sshd[5106]: Accepted publickey for core from 147.75.109.163 port 41526 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:34.587189 sshd[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:34.597560 systemd-logind[1991]: New session 22 of user core.
Apr 30 00:46:34.604417 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 00:46:34.913462 sshd[5106]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:34.919565 systemd[1]: sshd@21-172.31.18.219:22-147.75.109.163:41526.service: Deactivated successfully.
Apr 30 00:46:34.925409 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 00:46:34.927317 systemd-logind[1991]: Session 22 logged out. Waiting for processes to exit.
Apr 30 00:46:34.929231 systemd-logind[1991]: Removed session 22.
Apr 30 00:46:39.970627 systemd[1]: Started sshd@22-172.31.18.219:22-147.75.109.163:57254.service - OpenSSH per-connection server daemon (147.75.109.163:57254).
Apr 30 00:46:40.235796 sshd[5121]: Accepted publickey for core from 147.75.109.163 port 57254 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:40.238322 sshd[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:40.247327 systemd-logind[1991]: New session 23 of user core.
Apr 30 00:46:40.253374 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 00:46:40.546686 sshd[5121]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:40.553108 systemd[1]: sshd@22-172.31.18.219:22-147.75.109.163:57254.service: Deactivated successfully.
Apr 30 00:46:40.556594 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 00:46:40.558362 systemd-logind[1991]: Session 23 logged out. Waiting for processes to exit.
Apr 30 00:46:40.560739 systemd-logind[1991]: Removed session 23.
Apr 30 00:46:45.600673 systemd[1]: Started sshd@23-172.31.18.219:22-147.75.109.163:57262.service - OpenSSH per-connection server daemon (147.75.109.163:57262).
Apr 30 00:46:45.883119 sshd[5138]: Accepted publickey for core from 147.75.109.163 port 57262 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:45.886189 sshd[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:45.895326 systemd-logind[1991]: New session 24 of user core.
Apr 30 00:46:45.904404 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 00:46:46.210054 sshd[5138]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:46.215652 systemd[1]: sshd@23-172.31.18.219:22-147.75.109.163:57262.service: Deactivated successfully.
Apr 30 00:46:46.220586 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 00:46:46.226035 systemd-logind[1991]: Session 24 logged out. Waiting for processes to exit.
Apr 30 00:46:46.229493 systemd-logind[1991]: Removed session 24.
Apr 30 00:46:51.265597 systemd[1]: Started sshd@24-172.31.18.219:22-147.75.109.163:35540.service - OpenSSH per-connection server daemon (147.75.109.163:35540).
Apr 30 00:46:51.525534 sshd[5151]: Accepted publickey for core from 147.75.109.163 port 35540 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:51.530401 sshd[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:51.538711 systemd-logind[1991]: New session 25 of user core.
Apr 30 00:46:51.546349 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 00:46:51.838449 sshd[5151]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:51.845282 systemd[1]: sshd@24-172.31.18.219:22-147.75.109.163:35540.service: Deactivated successfully.
Apr 30 00:46:51.849494 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 00:46:51.850944 systemd-logind[1991]: Session 25 logged out. Waiting for processes to exit.
Apr 30 00:46:51.852962 systemd-logind[1991]: Removed session 25.
Apr 30 00:46:56.894607 systemd[1]: Started sshd@25-172.31.18.219:22-147.75.109.163:40776.service - OpenSSH per-connection server daemon (147.75.109.163:40776).
Apr 30 00:46:57.155200 sshd[5164]: Accepted publickey for core from 147.75.109.163 port 40776 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:57.157834 sshd[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:57.166965 systemd-logind[1991]: New session 26 of user core.
Apr 30 00:46:57.177349 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 00:46:57.465529 sshd[5164]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:57.471570 systemd-logind[1991]: Session 26 logged out. Waiting for processes to exit.
Apr 30 00:46:57.473034 systemd[1]: sshd@25-172.31.18.219:22-147.75.109.163:40776.service: Deactivated successfully.
Apr 30 00:46:57.476833 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 00:46:57.482148 systemd-logind[1991]: Removed session 26.
Apr 30 00:46:57.524741 systemd[1]: Started sshd@26-172.31.18.219:22-147.75.109.163:40788.service - OpenSSH per-connection server daemon (147.75.109.163:40788).
Apr 30 00:46:57.780005 sshd[5177]: Accepted publickey for core from 147.75.109.163 port 40788 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:57.782736 sshd[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:57.791759 systemd-logind[1991]: New session 27 of user core.
Apr 30 00:46:57.797635 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 00:47:00.497315 kubelet[3421]: I0430 00:47:00.497214 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vfssk" podStartSLOduration=137.497189013 podStartE2EDuration="2m17.497189013s" podCreationTimestamp="2025-04-30 00:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:16.076183646 +0000 UTC m=+36.805937548" watchObservedRunningTime="2025-04-30 00:47:00.497189013 +0000 UTC m=+141.226942891"
Apr 30 00:47:00.546196 containerd[2002]: time="2025-04-30T00:47:00.546033849Z" level=info msg="StopContainer for \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\" with timeout 30 (s)"
Apr 30 00:47:00.550845 containerd[2002]: time="2025-04-30T00:47:00.550643469Z" level=info msg="Stop container \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\" with signal terminated"
Apr 30 00:47:00.571929 containerd[2002]: time="2025-04-30T00:47:00.571752873Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:47:00.576818 systemd[1]: cri-containerd-5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074.scope: Deactivated successfully.
Apr 30 00:47:00.596210 containerd[2002]: time="2025-04-30T00:47:00.595620417Z" level=info msg="StopContainer for \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\" with timeout 2 (s)"
Apr 30 00:47:00.596741 containerd[2002]: time="2025-04-30T00:47:00.596678937Z" level=info msg="Stop container \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\" with signal terminated"
Apr 30 00:47:00.618247 systemd-networkd[1930]: lxc_health: Link DOWN
Apr 30 00:47:00.618262 systemd-networkd[1930]: lxc_health: Lost carrier
Apr 30 00:47:00.673970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074-rootfs.mount: Deactivated successfully.
Apr 30 00:47:00.676802 systemd[1]: cri-containerd-457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f.scope: Deactivated successfully.
Apr 30 00:47:00.677389 systemd[1]: cri-containerd-457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f.scope: Consumed 16.810s CPU time.
Apr 30 00:47:00.690097 containerd[2002]: time="2025-04-30T00:47:00.688282342Z" level=info msg="shim disconnected" id=5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074 namespace=k8s.io
Apr 30 00:47:00.690097 containerd[2002]: time="2025-04-30T00:47:00.688389262Z" level=warning msg="cleaning up after shim disconnected" id=5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074 namespace=k8s.io
Apr 30 00:47:00.690097 containerd[2002]: time="2025-04-30T00:47:00.688417426Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:00.731446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f-rootfs.mount: Deactivated successfully.
Apr 30 00:47:00.732017 containerd[2002]: time="2025-04-30T00:47:00.731933242Z" level=info msg="StopContainer for \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\" returns successfully"
Apr 30 00:47:00.734798 containerd[2002]: time="2025-04-30T00:47:00.734711398Z" level=info msg="StopPodSandbox for \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\""
Apr 30 00:47:00.734798 containerd[2002]: time="2025-04-30T00:47:00.734786554Z" level=info msg="Container to stop \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:47:00.741394 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b-shm.mount: Deactivated successfully.
Apr 30 00:47:00.746000 containerd[2002]: time="2025-04-30T00:47:00.744925318Z" level=info msg="shim disconnected" id=457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f namespace=k8s.io
Apr 30 00:47:00.746000 containerd[2002]: time="2025-04-30T00:47:00.745022722Z" level=warning msg="cleaning up after shim disconnected" id=457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f namespace=k8s.io
Apr 30 00:47:00.746000 containerd[2002]: time="2025-04-30T00:47:00.745045042Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:00.765027 systemd[1]: cri-containerd-b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b.scope: Deactivated successfully.
Apr 30 00:47:00.797420 containerd[2002]: time="2025-04-30T00:47:00.796921954Z" level=info msg="StopContainer for \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\" returns successfully"
Apr 30 00:47:00.799507 containerd[2002]: time="2025-04-30T00:47:00.798233326Z" level=info msg="StopPodSandbox for \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\""
Apr 30 00:47:00.799507 containerd[2002]: time="2025-04-30T00:47:00.798300994Z" level=info msg="Container to stop \"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:47:00.799507 containerd[2002]: time="2025-04-30T00:47:00.798326182Z" level=info msg="Container to stop \"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:47:00.799507 containerd[2002]: time="2025-04-30T00:47:00.798348766Z" level=info msg="Container to stop \"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:47:00.799507 containerd[2002]: time="2025-04-30T00:47:00.798923074Z" level=info msg="Container to stop \"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:47:00.799507 containerd[2002]: time="2025-04-30T00:47:00.798987310Z" level=info msg="Container to stop \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:47:00.803742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5-shm.mount: Deactivated successfully.
Apr 30 00:47:00.822913 systemd[1]: cri-containerd-d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5.scope: Deactivated successfully.
Apr 30 00:47:00.852118 containerd[2002]: time="2025-04-30T00:47:00.851698991Z" level=info msg="shim disconnected" id=b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b namespace=k8s.io
Apr 30 00:47:00.852118 containerd[2002]: time="2025-04-30T00:47:00.851779739Z" level=warning msg="cleaning up after shim disconnected" id=b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b namespace=k8s.io
Apr 30 00:47:00.852118 containerd[2002]: time="2025-04-30T00:47:00.851801987Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:00.880695 containerd[2002]: time="2025-04-30T00:47:00.880626647Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:47:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 00:47:00.881353 containerd[2002]: time="2025-04-30T00:47:00.880644287Z" level=info msg="shim disconnected" id=d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5 namespace=k8s.io
Apr 30 00:47:00.881353 containerd[2002]: time="2025-04-30T00:47:00.880824347Z" level=warning msg="cleaning up after shim disconnected" id=d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5 namespace=k8s.io
Apr 30 00:47:00.881353 containerd[2002]: time="2025-04-30T00:47:00.880860431Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:00.883613 containerd[2002]: time="2025-04-30T00:47:00.883204223Z" level=info msg="TearDown network for sandbox \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\" successfully"
Apr 30 00:47:00.883613 containerd[2002]: time="2025-04-30T00:47:00.883260539Z" level=info msg="StopPodSandbox for \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\" returns successfully"
Apr 30 00:47:00.923966 containerd[2002]: time="2025-04-30T00:47:00.923331323Z" level=info msg="TearDown network for sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" successfully"
Apr 30 00:47:00.923966 containerd[2002]: time="2025-04-30T00:47:00.923382395Z" level=info msg="StopPodSandbox for \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" returns successfully"
Apr 30 00:47:01.045047 kubelet[3421]: I0430 00:47:01.043594 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-config-path\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045047 kubelet[3421]: I0430 00:47:01.043726 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-lib-modules\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045047 kubelet[3421]: I0430 00:47:01.043776 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b1ff640-be57-4642-a095-ff871e916abc-cilium-config-path\") pod \"5b1ff640-be57-4642-a095-ff871e916abc\" (UID: \"5b1ff640-be57-4642-a095-ff871e916abc\") "
Apr 30 00:47:01.045047 kubelet[3421]: I0430 00:47:01.043849 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-run\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045047 kubelet[3421]: I0430 00:47:01.043933 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxhr5\" (UniqueName: \"kubernetes.io/projected/5b1ff640-be57-4642-a095-ff871e916abc-kube-api-access-fxhr5\") pod \"5b1ff640-be57-4642-a095-ff871e916abc\" (UID: \"5b1ff640-be57-4642-a095-ff871e916abc\") "
Apr 30 00:47:01.045047 kubelet[3421]: I0430 00:47:01.044017 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-host-proc-sys-net\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045518 kubelet[3421]: I0430 00:47:01.044110 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t2lt\" (UniqueName: \"kubernetes.io/projected/c2509c74-c623-4e82-be5f-7a9691baa46e-kube-api-access-9t2lt\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045518 kubelet[3421]: I0430 00:47:01.044149 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-cgroup\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045518 kubelet[3421]: I0430 00:47:01.044324 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2509c74-c623-4e82-be5f-7a9691baa46e-hubble-tls\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045518 kubelet[3421]: I0430 00:47:01.044486 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-host-proc-sys-kernel\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045518 kubelet[3421]: I0430 00:47:01.044526 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-etc-cni-netd\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045518 kubelet[3421]: I0430 00:47:01.044695 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2509c74-c623-4e82-be5f-7a9691baa46e-clustermesh-secrets\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045894 kubelet[3421]: I0430 00:47:01.044861 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-bpf-maps\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045894 kubelet[3421]: I0430 00:47:01.045029 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cni-path\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045894 kubelet[3421]: I0430 00:47:01.045126 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-hostproc\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.045894 kubelet[3421]: I0430 00:47:01.045170 3421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-xtables-lock\") pod \"c2509c74-c623-4e82-be5f-7a9691baa46e\" (UID: \"c2509c74-c623-4e82-be5f-7a9691baa46e\") "
Apr 30 00:47:01.052285 kubelet[3421]: I0430 00:47:01.044231 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 00:47:01.052285 kubelet[3421]: I0430 00:47:01.044397 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 00:47:01.052285 kubelet[3421]: I0430 00:47:01.045351 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 00:47:01.052285 kubelet[3421]: I0430 00:47:01.048014 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 00:47:01.052285 kubelet[3421]: I0430 00:47:01.048385 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 00:47:01.052593 kubelet[3421]: I0430 00:47:01.048928 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 00:47:01.052593 kubelet[3421]: I0430 00:47:01.049025 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 00:47:01.052593 kubelet[3421]: I0430 00:47:01.049472 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 00:47:01.052593 kubelet[3421]: I0430 00:47:01.049544 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cni-path" (OuterVolumeSpecName: "cni-path") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 00:47:01.052593 kubelet[3421]: I0430 00:47:01.049583 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-hostproc" (OuterVolumeSpecName: "hostproc") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 30 00:47:01.070101 kubelet[3421]: I0430 00:47:01.068502 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b1ff640-be57-4642-a095-ff871e916abc-kube-api-access-fxhr5" (OuterVolumeSpecName: "kube-api-access-fxhr5") pod "5b1ff640-be57-4642-a095-ff871e916abc" (UID: "5b1ff640-be57-4642-a095-ff871e916abc"). InnerVolumeSpecName "kube-api-access-fxhr5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 30 00:47:01.071716 kubelet[3421]: I0430 00:47:01.070810 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b1ff640-be57-4642-a095-ff871e916abc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5b1ff640-be57-4642-a095-ff871e916abc" (UID: "5b1ff640-be57-4642-a095-ff871e916abc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 30 00:47:01.072717 kubelet[3421]: I0430 00:47:01.072547 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2509c74-c623-4e82-be5f-7a9691baa46e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 30 00:47:01.076206 kubelet[3421]: I0430 00:47:01.076094 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 30 00:47:01.076340 kubelet[3421]: I0430 00:47:01.076207 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2509c74-c623-4e82-be5f-7a9691baa46e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 30 00:47:01.076708 kubelet[3421]: I0430 00:47:01.076673 3421 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2509c74-c623-4e82-be5f-7a9691baa46e-kube-api-access-9t2lt" (OuterVolumeSpecName: "kube-api-access-9t2lt") pod "c2509c74-c623-4e82-be5f-7a9691baa46e" (UID: "c2509c74-c623-4e82-be5f-7a9691baa46e"). InnerVolumeSpecName "kube-api-access-9t2lt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 30 00:47:01.145928 kubelet[3421]: I0430 00:47:01.145884 3421 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2509c74-c623-4e82-be5f-7a9691baa46e-hubble-tls\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.146180 kubelet[3421]: I0430 00:47:01.146155 3421 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-host-proc-sys-kernel\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.146613 kubelet[3421]: I0430 00:47:01.146280 3421 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9t2lt\" (UniqueName: \"kubernetes.io/projected/c2509c74-c623-4e82-be5f-7a9691baa46e-kube-api-access-9t2lt\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.146613 kubelet[3421]: I0430 00:47:01.146308 3421 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-cgroup\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.146613 kubelet[3421]: I0430 00:47:01.146331 3421 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2509c74-c623-4e82-be5f-7a9691baa46e-clustermesh-secrets\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.146613 kubelet[3421]: I0430 00:47:01.146355 3421 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-bpf-maps\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.146613 kubelet[3421]: I0430 00:47:01.146375 3421 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-etc-cni-netd\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.146613 kubelet[3421]: I0430 00:47:01.146395 3421 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cni-path\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.146613 kubelet[3421]: I0430 00:47:01.146417 3421 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-hostproc\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.146613 kubelet[3421]: I0430 00:47:01.146453 3421 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-xtables-lock\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.147041 kubelet[3421]: I0430 00:47:01.146475 3421 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-config-path\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.147041 kubelet[3421]: I0430 00:47:01.146496 3421 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-lib-modules\") on node \"ip-172-31-18-219\" DevicePath \"\""
Apr 30 00:47:01.147041 kubelet[3421]: I0430 00:47:01.146519 3421 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName:
\"kubernetes.io/configmap/5b1ff640-be57-4642-a095-ff871e916abc-cilium-config-path\") on node \"ip-172-31-18-219\" DevicePath \"\"" Apr 30 00:47:01.147041 kubelet[3421]: I0430 00:47:01.146541 3421 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-cilium-run\") on node \"ip-172-31-18-219\" DevicePath \"\"" Apr 30 00:47:01.147041 kubelet[3421]: I0430 00:47:01.146562 3421 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fxhr5\" (UniqueName: \"kubernetes.io/projected/5b1ff640-be57-4642-a095-ff871e916abc-kube-api-access-fxhr5\") on node \"ip-172-31-18-219\" DevicePath \"\"" Apr 30 00:47:01.147041 kubelet[3421]: I0430 00:47:01.146582 3421 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2509c74-c623-4e82-be5f-7a9691baa46e-host-proc-sys-net\") on node \"ip-172-31-18-219\" DevicePath \"\"" Apr 30 00:47:01.266889 kubelet[3421]: I0430 00:47:01.266452 3421 scope.go:117] "RemoveContainer" containerID="5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074" Apr 30 00:47:01.271389 containerd[2002]: time="2025-04-30T00:47:01.270992265Z" level=info msg="RemoveContainer for \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\"" Apr 30 00:47:01.282671 containerd[2002]: time="2025-04-30T00:47:01.282314469Z" level=info msg="RemoveContainer for \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\" returns successfully" Apr 30 00:47:01.283671 kubelet[3421]: I0430 00:47:01.283053 3421 scope.go:117] "RemoveContainer" containerID="5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074" Apr 30 00:47:01.283838 containerd[2002]: time="2025-04-30T00:47:01.283529121Z" level=error msg="ContainerStatus for \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\": not found" Apr 30 00:47:01.284393 kubelet[3421]: E0430 00:47:01.284197 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\": not found" containerID="5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074" Apr 30 00:47:01.284716 kubelet[3421]: I0430 00:47:01.284289 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074"} err="failed to get container status \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e3f20d2840b475a3568ff6ead740b935b649e9275e1077394baa66f80f9d074\": not found" Apr 30 00:47:01.284716 kubelet[3421]: I0430 00:47:01.284651 3421 scope.go:117] "RemoveContainer" containerID="457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f" Apr 30 00:47:01.285167 systemd[1]: Removed slice kubepods-besteffort-pod5b1ff640_be57_4642_a095_ff871e916abc.slice - libcontainer container kubepods-besteffort-pod5b1ff640_be57_4642_a095_ff871e916abc.slice. 
Apr 30 00:47:01.291937 containerd[2002]: time="2025-04-30T00:47:01.291386721Z" level=info msg="RemoveContainer for \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\"" Apr 30 00:47:01.299982 containerd[2002]: time="2025-04-30T00:47:01.299502033Z" level=info msg="RemoveContainer for \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\" returns successfully" Apr 30 00:47:01.303551 kubelet[3421]: I0430 00:47:01.303489 3421 scope.go:117] "RemoveContainer" containerID="c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a" Apr 30 00:47:01.307832 containerd[2002]: time="2025-04-30T00:47:01.307744845Z" level=info msg="RemoveContainer for \"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a\"" Apr 30 00:47:01.311567 systemd[1]: Removed slice kubepods-burstable-podc2509c74_c623_4e82_be5f_7a9691baa46e.slice - libcontainer container kubepods-burstable-podc2509c74_c623_4e82_be5f_7a9691baa46e.slice. Apr 30 00:47:01.311987 systemd[1]: kubepods-burstable-podc2509c74_c623_4e82_be5f_7a9691baa46e.slice: Consumed 16.977s CPU time. 
Apr 30 00:47:01.321939 containerd[2002]: time="2025-04-30T00:47:01.319521897Z" level=info msg="RemoveContainer for \"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a\" returns successfully" Apr 30 00:47:01.322149 kubelet[3421]: I0430 00:47:01.321608 3421 scope.go:117] "RemoveContainer" containerID="aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb" Apr 30 00:47:01.326933 containerd[2002]: time="2025-04-30T00:47:01.326869161Z" level=info msg="RemoveContainer for \"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb\"" Apr 30 00:47:01.334722 containerd[2002]: time="2025-04-30T00:47:01.334617309Z" level=info msg="RemoveContainer for \"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb\" returns successfully" Apr 30 00:47:01.335234 kubelet[3421]: I0430 00:47:01.335096 3421 scope.go:117] "RemoveContainer" containerID="0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e" Apr 30 00:47:01.340498 containerd[2002]: time="2025-04-30T00:47:01.340448265Z" level=info msg="RemoveContainer for \"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e\"" Apr 30 00:47:01.348441 containerd[2002]: time="2025-04-30T00:47:01.348324057Z" level=info msg="RemoveContainer for \"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e\" returns successfully" Apr 30 00:47:01.349135 kubelet[3421]: I0430 00:47:01.348976 3421 scope.go:117] "RemoveContainer" containerID="77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1" Apr 30 00:47:01.351841 containerd[2002]: time="2025-04-30T00:47:01.351423033Z" level=info msg="RemoveContainer for \"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1\"" Apr 30 00:47:01.357406 containerd[2002]: time="2025-04-30T00:47:01.357344481Z" level=info msg="RemoveContainer for \"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1\" returns successfully" Apr 30 00:47:01.357990 kubelet[3421]: I0430 00:47:01.357934 3421 scope.go:117] 
"RemoveContainer" containerID="457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f" Apr 30 00:47:01.358705 containerd[2002]: time="2025-04-30T00:47:01.358593993Z" level=error msg="ContainerStatus for \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\": not found" Apr 30 00:47:01.359319 kubelet[3421]: E0430 00:47:01.359101 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\": not found" containerID="457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f" Apr 30 00:47:01.359319 kubelet[3421]: I0430 00:47:01.359156 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f"} err="failed to get container status \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\": rpc error: code = NotFound desc = an error occurred when try to find container \"457d72f8723f0cfc9b0dff36f0147ca5e8b50ae3c1e37d569c5f2bcc763cc11f\": not found" Apr 30 00:47:01.359319 kubelet[3421]: I0430 00:47:01.359193 3421 scope.go:117] "RemoveContainer" containerID="c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a" Apr 30 00:47:01.359870 containerd[2002]: time="2025-04-30T00:47:01.359748477Z" level=error msg="ContainerStatus for \"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a\": not found" Apr 30 00:47:01.360096 kubelet[3421]: E0430 00:47:01.360027 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a\": not found" containerID="c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a" Apr 30 00:47:01.360204 kubelet[3421]: I0430 00:47:01.360096 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a"} err="failed to get container status \"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c91bf068c4a31ed1d0c6505675ab68b127f693c90f02b6a6c898def6196e716a\": not found" Apr 30 00:47:01.360204 kubelet[3421]: I0430 00:47:01.360131 3421 scope.go:117] "RemoveContainer" containerID="aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb" Apr 30 00:47:01.360506 containerd[2002]: time="2025-04-30T00:47:01.360435825Z" level=error msg="ContainerStatus for \"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb\": not found" Apr 30 00:47:01.360912 kubelet[3421]: E0430 00:47:01.360703 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb\": not found" containerID="aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb" Apr 30 00:47:01.360912 kubelet[3421]: I0430 00:47:01.360747 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb"} err="failed to get container status \"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"aee11fbd737ad57dd82adf11dae2727a2d0ef77a68ac2a81874b4eb34122f0eb\": not found" Apr 30 00:47:01.360912 kubelet[3421]: I0430 00:47:01.360781 3421 scope.go:117] "RemoveContainer" containerID="0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e" Apr 30 00:47:01.361447 containerd[2002]: time="2025-04-30T00:47:01.361377921Z" level=error msg="ContainerStatus for \"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e\": not found" Apr 30 00:47:01.361883 kubelet[3421]: E0430 00:47:01.361789 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e\": not found" containerID="0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e" Apr 30 00:47:01.361961 kubelet[3421]: I0430 00:47:01.361884 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e"} err="failed to get container status \"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d37a63b4ccb2ff9d4d546a92f8d42b8b8b1e19db2f2286ca6c9377b7741a02e\": not found" Apr 30 00:47:01.361961 kubelet[3421]: I0430 00:47:01.361939 3421 scope.go:117] "RemoveContainer" containerID="77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1" Apr 30 00:47:01.362445 containerd[2002]: time="2025-04-30T00:47:01.362382969Z" level=error msg="ContainerStatus for \"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1\": not found" Apr 30 00:47:01.362756 kubelet[3421]: E0430 00:47:01.362711 3421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1\": not found" containerID="77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1" Apr 30 00:47:01.362854 kubelet[3421]: I0430 00:47:01.362766 3421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1"} err="failed to get container status \"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"77ecf5a68d295c4f9ce947c0f379d15e00a0288d26fcfd17da104c87442115c1\": not found" Apr 30 00:47:01.536020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5-rootfs.mount: Deactivated successfully. Apr 30 00:47:01.536235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b-rootfs.mount: Deactivated successfully. Apr 30 00:47:01.536375 systemd[1]: var-lib-kubelet-pods-c2509c74\x2dc623\x2d4e82\x2dbe5f\x2d7a9691baa46e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9t2lt.mount: Deactivated successfully. Apr 30 00:47:01.536512 systemd[1]: var-lib-kubelet-pods-5b1ff640\x2dbe57\x2d4642\x2da095\x2dff871e916abc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfxhr5.mount: Deactivated successfully. Apr 30 00:47:01.536664 systemd[1]: var-lib-kubelet-pods-c2509c74\x2dc623\x2d4e82\x2dbe5f\x2d7a9691baa46e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 30 00:47:01.536800 systemd[1]: var-lib-kubelet-pods-c2509c74\x2dc623\x2d4e82\x2dbe5f\x2d7a9691baa46e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 00:47:01.552448 kubelet[3421]: I0430 00:47:01.551960 3421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b1ff640-be57-4642-a095-ff871e916abc" path="/var/lib/kubelet/pods/5b1ff640-be57-4642-a095-ff871e916abc/volumes" Apr 30 00:47:01.554257 kubelet[3421]: I0430 00:47:01.553006 3421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2509c74-c623-4e82-be5f-7a9691baa46e" path="/var/lib/kubelet/pods/c2509c74-c623-4e82-be5f-7a9691baa46e/volumes" Apr 30 00:47:02.473721 sshd[5177]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:02.481673 systemd[1]: sshd@26-172.31.18.219:22-147.75.109.163:40788.service: Deactivated successfully. Apr 30 00:47:02.485639 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 00:47:02.487137 systemd[1]: session-27.scope: Consumed 1.956s CPU time. Apr 30 00:47:02.488299 systemd-logind[1991]: Session 27 logged out. Waiting for processes to exit. Apr 30 00:47:02.491216 systemd-logind[1991]: Removed session 27. Apr 30 00:47:02.530581 systemd[1]: Started sshd@27-172.31.18.219:22-147.75.109.163:40796.service - OpenSSH per-connection server daemon (147.75.109.163:40796). Apr 30 00:47:02.802644 sshd[5343]: Accepted publickey for core from 147.75.109.163 port 40796 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:47:02.805617 sshd[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:02.818201 systemd-logind[1991]: New session 28 of user core. Apr 30 00:47:02.824378 systemd[1]: Started session-28.scope - Session 28 of User core. 
Apr 30 00:47:03.191102 ntpd[1984]: Deleting interface #12 lxc_health, fe80::b805:9fff:fe63:ee19%8#123, interface stats: received=0, sent=0, dropped=0, active_time=115 secs Apr 30 00:47:03.191650 ntpd[1984]: 30 Apr 00:47:03 ntpd[1984]: Deleting interface #12 lxc_health, fe80::b805:9fff:fe63:ee19%8#123, interface stats: received=0, sent=0, dropped=0, active_time=115 secs Apr 30 00:47:04.584047 sshd[5343]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:04.585300 kubelet[3421]: I0430 00:47:04.584924 3421 memory_manager.go:355] "RemoveStaleState removing state" podUID="c2509c74-c623-4e82-be5f-7a9691baa46e" containerName="cilium-agent" Apr 30 00:47:04.585300 kubelet[3421]: I0430 00:47:04.584980 3421 memory_manager.go:355] "RemoveStaleState removing state" podUID="5b1ff640-be57-4642-a095-ff871e916abc" containerName="cilium-operator" Apr 30 00:47:04.601605 systemd[1]: sshd@27-172.31.18.219:22-147.75.109.163:40796.service: Deactivated successfully. Apr 30 00:47:04.608772 systemd[1]: session-28.scope: Deactivated successfully. Apr 30 00:47:04.613910 systemd[1]: session-28.scope: Consumed 1.520s CPU time. Apr 30 00:47:04.622726 systemd-logind[1991]: Session 28 logged out. Waiting for processes to exit. Apr 30 00:47:04.654668 systemd[1]: Started sshd@28-172.31.18.219:22-147.75.109.163:40800.service - OpenSSH per-connection server daemon (147.75.109.163:40800). Apr 30 00:47:04.659368 systemd-logind[1991]: Removed session 28. Apr 30 00:47:04.666985 systemd[1]: Created slice kubepods-burstable-podf98dde71_cc0d_4808_ba9f_dd4bdd81aa2a.slice - libcontainer container kubepods-burstable-podf98dde71_cc0d_4808_ba9f_dd4bdd81aa2a.slice. 
Apr 30 00:47:04.673108 kubelet[3421]: I0430 00:47:04.671643 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-xtables-lock\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.673427 kubelet[3421]: I0430 00:47:04.673386 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-host-proc-sys-kernel\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.673606 kubelet[3421]: I0430 00:47:04.673568 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-cni-path\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.673766 kubelet[3421]: I0430 00:47:04.673730 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-cilium-ipsec-secrets\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.673972 kubelet[3421]: I0430 00:47:04.673937 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-clustermesh-secrets\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.674257 kubelet[3421]: I0430 00:47:04.674131 3421 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-cilium-cgroup\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.674257 kubelet[3421]: I0430 00:47:04.674211 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-hubble-tls\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.675155 kubelet[3421]: I0430 00:47:04.674648 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-lib-modules\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.675155 kubelet[3421]: I0430 00:47:04.674830 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-cilium-config-path\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.675979 kubelet[3421]: I0430 00:47:04.675023 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mw2p\" (UniqueName: \"kubernetes.io/projected/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-kube-api-access-6mw2p\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.677001 kubelet[3421]: I0430 00:47:04.676207 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-cilium-run\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.677001 kubelet[3421]: I0430 00:47:04.676828 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-etc-cni-netd\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.677001 kubelet[3421]: I0430 00:47:04.676884 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-host-proc-sys-net\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.677001 kubelet[3421]: I0430 00:47:04.676926 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-bpf-maps\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.677001 kubelet[3421]: I0430 00:47:04.676977 3421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a-hostproc\") pod \"cilium-5mttd\" (UID: \"f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a\") " pod="kube-system/cilium-5mttd" Apr 30 00:47:04.830090 kubelet[3421]: E0430 00:47:04.828615 3421 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 00:47:04.986022 containerd[2002]: time="2025-04-30T00:47:04.984482487Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5mttd,Uid:f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a,Namespace:kube-system,Attempt:0,}" Apr 30 00:47:04.986609 sshd[5355]: Accepted publickey for core from 147.75.109.163 port 40800 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:47:04.987588 sshd[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:04.997708 systemd-logind[1991]: New session 29 of user core. Apr 30 00:47:05.004374 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 30 00:47:05.042229 containerd[2002]: time="2025-04-30T00:47:05.041963915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:47:05.042735 containerd[2002]: time="2025-04-30T00:47:05.042571979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:47:05.044889 containerd[2002]: time="2025-04-30T00:47:05.044429375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:05.044889 containerd[2002]: time="2025-04-30T00:47:05.044741039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:47:05.075422 systemd[1]: Started cri-containerd-e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17.scope - libcontainer container e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17. 
Apr 30 00:47:05.120252 containerd[2002]: time="2025-04-30T00:47:05.120028572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5mttd,Uid:f98dde71-cc0d-4808-ba9f-dd4bdd81aa2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17\"" Apr 30 00:47:05.127212 containerd[2002]: time="2025-04-30T00:47:05.127155684Z" level=info msg="CreateContainer within sandbox \"e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:47:05.152948 containerd[2002]: time="2025-04-30T00:47:05.152863668Z" level=info msg="CreateContainer within sandbox \"e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"84adb02fa84d6400b056b27bda847aa2825e694d6b784a580e2aef0893bc72c7\"" Apr 30 00:47:05.154887 containerd[2002]: time="2025-04-30T00:47:05.154791216Z" level=info msg="StartContainer for \"84adb02fa84d6400b056b27bda847aa2825e694d6b784a580e2aef0893bc72c7\"" Apr 30 00:47:05.181173 sshd[5355]: pam_unix(sshd:session): session closed for user core Apr 30 00:47:05.192050 systemd[1]: sshd@28-172.31.18.219:22-147.75.109.163:40800.service: Deactivated successfully. Apr 30 00:47:05.200392 systemd[1]: session-29.scope: Deactivated successfully. Apr 30 00:47:05.204452 systemd-logind[1991]: Session 29 logged out. Waiting for processes to exit. Apr 30 00:47:05.215548 systemd[1]: Started cri-containerd-84adb02fa84d6400b056b27bda847aa2825e694d6b784a580e2aef0893bc72c7.scope - libcontainer container 84adb02fa84d6400b056b27bda847aa2825e694d6b784a580e2aef0893bc72c7. Apr 30 00:47:05.217199 systemd-logind[1991]: Removed session 29. Apr 30 00:47:05.236271 systemd[1]: Started sshd@29-172.31.18.219:22-147.75.109.163:40804.service - OpenSSH per-connection server daemon (147.75.109.163:40804). 
Apr 30 00:47:05.290620 containerd[2002]: time="2025-04-30T00:47:05.290442469Z" level=info msg="StartContainer for \"84adb02fa84d6400b056b27bda847aa2825e694d6b784a580e2aef0893bc72c7\" returns successfully" Apr 30 00:47:05.310930 systemd[1]: cri-containerd-84adb02fa84d6400b056b27bda847aa2825e694d6b784a580e2aef0893bc72c7.scope: Deactivated successfully. Apr 30 00:47:05.378957 containerd[2002]: time="2025-04-30T00:47:05.378778813Z" level=info msg="shim disconnected" id=84adb02fa84d6400b056b27bda847aa2825e694d6b784a580e2aef0893bc72c7 namespace=k8s.io Apr 30 00:47:05.378957 containerd[2002]: time="2025-04-30T00:47:05.378951361Z" level=warning msg="cleaning up after shim disconnected" id=84adb02fa84d6400b056b27bda847aa2825e694d6b784a580e2aef0893bc72c7 namespace=k8s.io Apr 30 00:47:05.378957 containerd[2002]: time="2025-04-30T00:47:05.378973177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:47:05.520348 sshd[5431]: Accepted publickey for core from 147.75.109.163 port 40804 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:47:05.523098 sshd[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:47:05.533931 systemd-logind[1991]: New session 30 of user core. Apr 30 00:47:05.535800 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 30 00:47:06.314213 containerd[2002]: time="2025-04-30T00:47:06.313431302Z" level=info msg="CreateContainer within sandbox \"e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:47:06.339977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2466494225.mount: Deactivated successfully. 
Apr 30 00:47:06.342281 containerd[2002]: time="2025-04-30T00:47:06.342193154Z" level=info msg="CreateContainer within sandbox \"e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6c721fbcd1a443e6b1d658e9250cafa96615df3e7bf7addb4996b4a50c006f9c\""
Apr 30 00:47:06.344140 containerd[2002]: time="2025-04-30T00:47:06.343868150Z" level=info msg="StartContainer for \"6c721fbcd1a443e6b1d658e9250cafa96615df3e7bf7addb4996b4a50c006f9c\""
Apr 30 00:47:06.409395 systemd[1]: Started cri-containerd-6c721fbcd1a443e6b1d658e9250cafa96615df3e7bf7addb4996b4a50c006f9c.scope - libcontainer container 6c721fbcd1a443e6b1d658e9250cafa96615df3e7bf7addb4996b4a50c006f9c.
Apr 30 00:47:06.457331 containerd[2002]: time="2025-04-30T00:47:06.457047279Z" level=info msg="StartContainer for \"6c721fbcd1a443e6b1d658e9250cafa96615df3e7bf7addb4996b4a50c006f9c\" returns successfully"
Apr 30 00:47:06.471014 systemd[1]: cri-containerd-6c721fbcd1a443e6b1d658e9250cafa96615df3e7bf7addb4996b4a50c006f9c.scope: Deactivated successfully.
Apr 30 00:47:06.515605 containerd[2002]: time="2025-04-30T00:47:06.515493471Z" level=info msg="shim disconnected" id=6c721fbcd1a443e6b1d658e9250cafa96615df3e7bf7addb4996b4a50c006f9c namespace=k8s.io
Apr 30 00:47:06.515605 containerd[2002]: time="2025-04-30T00:47:06.515567787Z" level=warning msg="cleaning up after shim disconnected" id=6c721fbcd1a443e6b1d658e9250cafa96615df3e7bf7addb4996b4a50c006f9c namespace=k8s.io
Apr 30 00:47:06.515605 containerd[2002]: time="2025-04-30T00:47:06.515590707Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:06.790107 systemd[1]: run-containerd-runc-k8s.io-6c721fbcd1a443e6b1d658e9250cafa96615df3e7bf7addb4996b4a50c006f9c-runc.kCJ1EB.mount: Deactivated successfully.
Apr 30 00:47:06.790302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c721fbcd1a443e6b1d658e9250cafa96615df3e7bf7addb4996b4a50c006f9c-rootfs.mount: Deactivated successfully.
Apr 30 00:47:07.320315 containerd[2002]: time="2025-04-30T00:47:07.319096635Z" level=info msg="CreateContainer within sandbox \"e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 00:47:07.357965 containerd[2002]: time="2025-04-30T00:47:07.357878667Z" level=info msg="CreateContainer within sandbox \"e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dffaf025eb77cb7d9faa926b6dbaca51f4b3b0ddd66d7b205b46fb9f6ddfcecd\""
Apr 30 00:47:07.360965 containerd[2002]: time="2025-04-30T00:47:07.359382675Z" level=info msg="StartContainer for \"dffaf025eb77cb7d9faa926b6dbaca51f4b3b0ddd66d7b205b46fb9f6ddfcecd\""
Apr 30 00:47:07.432370 systemd[1]: Started cri-containerd-dffaf025eb77cb7d9faa926b6dbaca51f4b3b0ddd66d7b205b46fb9f6ddfcecd.scope - libcontainer container dffaf025eb77cb7d9faa926b6dbaca51f4b3b0ddd66d7b205b46fb9f6ddfcecd.
Apr 30 00:47:07.485496 containerd[2002]: time="2025-04-30T00:47:07.485415340Z" level=info msg="StartContainer for \"dffaf025eb77cb7d9faa926b6dbaca51f4b3b0ddd66d7b205b46fb9f6ddfcecd\" returns successfully"
Apr 30 00:47:07.491755 systemd[1]: cri-containerd-dffaf025eb77cb7d9faa926b6dbaca51f4b3b0ddd66d7b205b46fb9f6ddfcecd.scope: Deactivated successfully.
Apr 30 00:47:07.548211 containerd[2002]: time="2025-04-30T00:47:07.547930312Z" level=info msg="shim disconnected" id=dffaf025eb77cb7d9faa926b6dbaca51f4b3b0ddd66d7b205b46fb9f6ddfcecd namespace=k8s.io
Apr 30 00:47:07.548211 containerd[2002]: time="2025-04-30T00:47:07.548013304Z" level=warning msg="cleaning up after shim disconnected" id=dffaf025eb77cb7d9faa926b6dbaca51f4b3b0ddd66d7b205b46fb9f6ddfcecd namespace=k8s.io
Apr 30 00:47:07.548211 containerd[2002]: time="2025-04-30T00:47:07.548033668Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:07.582718 containerd[2002]: time="2025-04-30T00:47:07.582228280Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:47:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 00:47:07.791568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dffaf025eb77cb7d9faa926b6dbaca51f4b3b0ddd66d7b205b46fb9f6ddfcecd-rootfs.mount: Deactivated successfully.
Apr 30 00:47:08.328939 containerd[2002]: time="2025-04-30T00:47:08.328874272Z" level=info msg="CreateContainer within sandbox \"e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 00:47:08.361887 containerd[2002]: time="2025-04-30T00:47:08.361771336Z" level=info msg="CreateContainer within sandbox \"e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bfaaa690888f15cff0c20e3bdc48ee5ccea9a839cfde228371ef70d5904cd19c\""
Apr 30 00:47:08.363300 containerd[2002]: time="2025-04-30T00:47:08.363242116Z" level=info msg="StartContainer for \"bfaaa690888f15cff0c20e3bdc48ee5ccea9a839cfde228371ef70d5904cd19c\""
Apr 30 00:47:08.417409 systemd[1]: Started cri-containerd-bfaaa690888f15cff0c20e3bdc48ee5ccea9a839cfde228371ef70d5904cd19c.scope - libcontainer container bfaaa690888f15cff0c20e3bdc48ee5ccea9a839cfde228371ef70d5904cd19c.
Apr 30 00:47:08.459967 systemd[1]: cri-containerd-bfaaa690888f15cff0c20e3bdc48ee5ccea9a839cfde228371ef70d5904cd19c.scope: Deactivated successfully.
Apr 30 00:47:08.463579 containerd[2002]: time="2025-04-30T00:47:08.461825980Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf98dde71_cc0d_4808_ba9f_dd4bdd81aa2a.slice/cri-containerd-bfaaa690888f15cff0c20e3bdc48ee5ccea9a839cfde228371ef70d5904cd19c.scope/memory.events\": no such file or directory"
Apr 30 00:47:08.469034 containerd[2002]: time="2025-04-30T00:47:08.468961685Z" level=info msg="StartContainer for \"bfaaa690888f15cff0c20e3bdc48ee5ccea9a839cfde228371ef70d5904cd19c\" returns successfully"
Apr 30 00:47:08.515945 containerd[2002]: time="2025-04-30T00:47:08.515788289Z" level=info msg="shim disconnected" id=bfaaa690888f15cff0c20e3bdc48ee5ccea9a839cfde228371ef70d5904cd19c namespace=k8s.io
Apr 30 00:47:08.515945 containerd[2002]: time="2025-04-30T00:47:08.515937977Z" level=warning msg="cleaning up after shim disconnected" id=bfaaa690888f15cff0c20e3bdc48ee5ccea9a839cfde228371ef70d5904cd19c namespace=k8s.io
Apr 30 00:47:08.518312 containerd[2002]: time="2025-04-30T00:47:08.515960309Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:08.792481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfaaa690888f15cff0c20e3bdc48ee5ccea9a839cfde228371ef70d5904cd19c-rootfs.mount: Deactivated successfully.
Apr 30 00:47:09.336235 containerd[2002]: time="2025-04-30T00:47:09.335941385Z" level=info msg="CreateContainer within sandbox \"e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 00:47:09.372508 containerd[2002]: time="2025-04-30T00:47:09.372438401Z" level=info msg="CreateContainer within sandbox \"e7ab05d128507530d830ec682d541ada36f1781d2b2b1cb1b51260e41e17dc17\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"88f7d01c06248375a8d324a8e13e55d1356ef8da97823d592e4daa7292762d9d\""
Apr 30 00:47:09.373943 containerd[2002]: time="2025-04-30T00:47:09.373864193Z" level=info msg="StartContainer for \"88f7d01c06248375a8d324a8e13e55d1356ef8da97823d592e4daa7292762d9d\""
Apr 30 00:47:09.432392 systemd[1]: Started cri-containerd-88f7d01c06248375a8d324a8e13e55d1356ef8da97823d592e4daa7292762d9d.scope - libcontainer container 88f7d01c06248375a8d324a8e13e55d1356ef8da97823d592e4daa7292762d9d.
Apr 30 00:47:09.487083 containerd[2002]: time="2025-04-30T00:47:09.486986670Z" level=info msg="StartContainer for \"88f7d01c06248375a8d324a8e13e55d1356ef8da97823d592e4daa7292762d9d\" returns successfully"
Apr 30 00:47:10.303150 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 30 00:47:14.486712 systemd[1]: run-containerd-runc-k8s.io-88f7d01c06248375a8d324a8e13e55d1356ef8da97823d592e4daa7292762d9d-runc.4bXF0M.mount: Deactivated successfully.
Apr 30 00:47:14.728311 systemd-networkd[1930]: lxc_health: Link UP
Apr 30 00:47:14.741956 (udev-worker)[6205]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 00:47:14.752596 systemd-networkd[1930]: lxc_health: Gained carrier
Apr 30 00:47:15.028793 kubelet[3421]: I0430 00:47:15.027747 3421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5mttd" podStartSLOduration=11.027722145 podStartE2EDuration="11.027722145s" podCreationTimestamp="2025-04-30 00:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:47:10.378051198 +0000 UTC m=+151.107805100" watchObservedRunningTime="2025-04-30 00:47:15.027722145 +0000 UTC m=+155.757476035"
Apr 30 00:47:16.771631 systemd[1]: run-containerd-runc-k8s.io-88f7d01c06248375a8d324a8e13e55d1356ef8da97823d592e4daa7292762d9d-runc.m4HJB7.mount: Deactivated successfully.
Apr 30 00:47:16.780111 systemd-networkd[1930]: lxc_health: Gained IPv6LL
Apr 30 00:47:19.191829 ntpd[1984]: Listen normally on 15 lxc_health [fe80::a009:97ff:fec8:728d%14]:123
Apr 30 00:47:19.192485 ntpd[1984]: 30 Apr 00:47:19 ntpd[1984]: Listen normally on 15 lxc_health [fe80::a009:97ff:fec8:728d%14]:123
Apr 30 00:47:21.343828 systemd[1]: run-containerd-runc-k8s.io-88f7d01c06248375a8d324a8e13e55d1356ef8da97823d592e4daa7292762d9d-runc.KGyqiM.mount: Deactivated successfully.
Apr 30 00:47:21.478207 sshd[5431]: pam_unix(sshd:session): session closed for user core
Apr 30 00:47:21.485909 systemd[1]: sshd@29-172.31.18.219:22-147.75.109.163:40804.service: Deactivated successfully.
Apr 30 00:47:21.492682 systemd[1]: session-30.scope: Deactivated successfully.
Apr 30 00:47:21.498984 systemd-logind[1991]: Session 30 logged out. Waiting for processes to exit.
Apr 30 00:47:21.502714 systemd-logind[1991]: Removed session 30.
Apr 30 00:47:36.010585 systemd[1]: cri-containerd-c0a73bf0a81a39c8f0b568b7c19a1d5b2c97976c74d3ba8cf976b325cd527ce4.scope: Deactivated successfully.
Apr 30 00:47:36.012243 systemd[1]: cri-containerd-c0a73bf0a81a39c8f0b568b7c19a1d5b2c97976c74d3ba8cf976b325cd527ce4.scope: Consumed 6.632s CPU time, 19.9M memory peak, 0B memory swap peak.
Apr 30 00:47:36.054156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0a73bf0a81a39c8f0b568b7c19a1d5b2c97976c74d3ba8cf976b325cd527ce4-rootfs.mount: Deactivated successfully.
Apr 30 00:47:36.063209 containerd[2002]: time="2025-04-30T00:47:36.063001842Z" level=info msg="shim disconnected" id=c0a73bf0a81a39c8f0b568b7c19a1d5b2c97976c74d3ba8cf976b325cd527ce4 namespace=k8s.io
Apr 30 00:47:36.064543 containerd[2002]: time="2025-04-30T00:47:36.063297078Z" level=warning msg="cleaning up after shim disconnected" id=c0a73bf0a81a39c8f0b568b7c19a1d5b2c97976c74d3ba8cf976b325cd527ce4 namespace=k8s.io
Apr 30 00:47:36.064543 containerd[2002]: time="2025-04-30T00:47:36.063326802Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:36.423186 kubelet[3421]: I0430 00:47:36.423031 3421 scope.go:117] "RemoveContainer" containerID="c0a73bf0a81a39c8f0b568b7c19a1d5b2c97976c74d3ba8cf976b325cd527ce4"
Apr 30 00:47:36.426536 containerd[2002]: time="2025-04-30T00:47:36.426401731Z" level=info msg="CreateContainer within sandbox \"3667808a603251b69cdc0476d09730c4e20efa5eb44a0c808f52e65ba7562eb3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 00:47:36.454267 containerd[2002]: time="2025-04-30T00:47:36.454186484Z" level=info msg="CreateContainer within sandbox \"3667808a603251b69cdc0476d09730c4e20efa5eb44a0c808f52e65ba7562eb3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1e2426ccbb05ba7b202f4262c3e18099c74b247982bb061a1e3d490274e11bdb\""
Apr 30 00:47:36.454884 containerd[2002]: time="2025-04-30T00:47:36.454835984Z" level=info msg="StartContainer for \"1e2426ccbb05ba7b202f4262c3e18099c74b247982bb061a1e3d490274e11bdb\""
Apr 30 00:47:36.511478 systemd[1]: Started cri-containerd-1e2426ccbb05ba7b202f4262c3e18099c74b247982bb061a1e3d490274e11bdb.scope - libcontainer container 1e2426ccbb05ba7b202f4262c3e18099c74b247982bb061a1e3d490274e11bdb.
Apr 30 00:47:36.588320 containerd[2002]: time="2025-04-30T00:47:36.588230036Z" level=info msg="StartContainer for \"1e2426ccbb05ba7b202f4262c3e18099c74b247982bb061a1e3d490274e11bdb\" returns successfully"
Apr 30 00:47:39.587387 containerd[2002]: time="2025-04-30T00:47:39.587322719Z" level=info msg="StopPodSandbox for \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\""
Apr 30 00:47:39.588148 containerd[2002]: time="2025-04-30T00:47:39.587464019Z" level=info msg="TearDown network for sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" successfully"
Apr 30 00:47:39.588148 containerd[2002]: time="2025-04-30T00:47:39.587489819Z" level=info msg="StopPodSandbox for \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" returns successfully"
Apr 30 00:47:39.588148 containerd[2002]: time="2025-04-30T00:47:39.588816371Z" level=info msg="RemovePodSandbox for \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\""
Apr 30 00:47:39.589157 containerd[2002]: time="2025-04-30T00:47:39.588869135Z" level=info msg="Forcibly stopping sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\""
Apr 30 00:47:39.589157 containerd[2002]: time="2025-04-30T00:47:39.588977207Z" level=info msg="TearDown network for sandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" successfully"
Apr 30 00:47:39.596220 containerd[2002]: time="2025-04-30T00:47:39.596146751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:47:39.596386 containerd[2002]: time="2025-04-30T00:47:39.596246603Z" level=info msg="RemovePodSandbox \"d00105786298b963f56eb0853c0a548d5ebe1b15ce46e837f3660a171b7b24a5\" returns successfully"
Apr 30 00:47:39.597024 containerd[2002]: time="2025-04-30T00:47:39.596965079Z" level=info msg="StopPodSandbox for \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\""
Apr 30 00:47:39.597180 containerd[2002]: time="2025-04-30T00:47:39.597135215Z" level=info msg="TearDown network for sandbox \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\" successfully"
Apr 30 00:47:39.597180 containerd[2002]: time="2025-04-30T00:47:39.597163391Z" level=info msg="StopPodSandbox for \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\" returns successfully"
Apr 30 00:47:39.597908 containerd[2002]: time="2025-04-30T00:47:39.597865631Z" level=info msg="RemovePodSandbox for \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\""
Apr 30 00:47:39.598021 containerd[2002]: time="2025-04-30T00:47:39.597917219Z" level=info msg="Forcibly stopping sandbox \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\""
Apr 30 00:47:39.598109 containerd[2002]: time="2025-04-30T00:47:39.598018775Z" level=info msg="TearDown network for sandbox \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\" successfully"
Apr 30 00:47:39.604037 containerd[2002]: time="2025-04-30T00:47:39.603969191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:47:39.604205 containerd[2002]: time="2025-04-30T00:47:39.604057991Z" level=info msg="RemovePodSandbox \"b44f30c75bf82f690c957fe06ede5f6f0a7917621e393ad61ec0b7615136262b\" returns successfully"
Apr 30 00:47:40.084785 systemd[1]: cri-containerd-5dbd9d87d2813b5727a2452d53a7f7ab5fd3b29ce3a473e651385620c664f80e.scope: Deactivated successfully.
Apr 30 00:47:40.086990 systemd[1]: cri-containerd-5dbd9d87d2813b5727a2452d53a7f7ab5fd3b29ce3a473e651385620c664f80e.scope: Consumed 5.084s CPU time, 15.1M memory peak, 0B memory swap peak.
Apr 30 00:47:40.126659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dbd9d87d2813b5727a2452d53a7f7ab5fd3b29ce3a473e651385620c664f80e-rootfs.mount: Deactivated successfully.
Apr 30 00:47:40.137704 containerd[2002]: time="2025-04-30T00:47:40.137597314Z" level=info msg="shim disconnected" id=5dbd9d87d2813b5727a2452d53a7f7ab5fd3b29ce3a473e651385620c664f80e namespace=k8s.io
Apr 30 00:47:40.137704 containerd[2002]: time="2025-04-30T00:47:40.137692186Z" level=warning msg="cleaning up after shim disconnected" id=5dbd9d87d2813b5727a2452d53a7f7ab5fd3b29ce3a473e651385620c664f80e namespace=k8s.io
Apr 30 00:47:40.138268 containerd[2002]: time="2025-04-30T00:47:40.137715874Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:40.440940 kubelet[3421]: I0430 00:47:40.440743 3421 scope.go:117] "RemoveContainer" containerID="5dbd9d87d2813b5727a2452d53a7f7ab5fd3b29ce3a473e651385620c664f80e"
Apr 30 00:47:40.444004 containerd[2002]: time="2025-04-30T00:47:40.443681255Z" level=info msg="CreateContainer within sandbox \"c4e928f5ebfb15ea7a7b18139ef44e268316d6087d57952426fe7eee5d172abb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 00:47:40.470796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073862096.mount: Deactivated successfully.
Apr 30 00:47:40.471883 containerd[2002]: time="2025-04-30T00:47:40.471630743Z" level=info msg="CreateContainer within sandbox \"c4e928f5ebfb15ea7a7b18139ef44e268316d6087d57952426fe7eee5d172abb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"667c75e926db5dfe8931c198a2cb84e3836002caf2518104ef3c046ecd04441b\""
Apr 30 00:47:40.473144 containerd[2002]: time="2025-04-30T00:47:40.473045219Z" level=info msg="StartContainer for \"667c75e926db5dfe8931c198a2cb84e3836002caf2518104ef3c046ecd04441b\""
Apr 30 00:47:40.535393 systemd[1]: Started cri-containerd-667c75e926db5dfe8931c198a2cb84e3836002caf2518104ef3c046ecd04441b.scope - libcontainer container 667c75e926db5dfe8931c198a2cb84e3836002caf2518104ef3c046ecd04441b.
Apr 30 00:47:40.617420 containerd[2002]: time="2025-04-30T00:47:40.617340180Z" level=info msg="StartContainer for \"667c75e926db5dfe8931c198a2cb84e3836002caf2518104ef3c046ecd04441b\" returns successfully"
Apr 30 00:47:43.155449 kubelet[3421]: E0430 00:47:43.154886 3421 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-219?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 30 00:47:53.155642 kubelet[3421]: E0430 00:47:53.155209 3421 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-219?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"