Apr 30 00:43:38.228332 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 30 00:43:38.228382 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:43:38.228453 kernel: KASLR disabled due to lack of seed
Apr 30 00:43:38.228472 kernel: efi: EFI v2.7 by EDK II
Apr 30 00:43:38.228489 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Apr 30 00:43:38.228506 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:43:38.228524 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 30 00:43:38.228540 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 30 00:43:38.228557 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 30 00:43:38.228573 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Apr 30 00:43:38.228594 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 30 00:43:38.228610 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 30 00:43:38.228627 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 30 00:43:38.228643 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 30 00:43:38.228662 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 30 00:43:38.228683 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 30 00:43:38.228701 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 30 00:43:38.228719 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 30 00:43:38.228737 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 30 00:43:38.228757 kernel: printk: bootconsole [uart0] enabled
Apr 30 00:43:38.228774 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:43:38.228793 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 30 00:43:38.228813 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 30 00:43:38.228832 kernel: Zone ranges:
Apr 30 00:43:38.228851 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 00:43:38.228869 kernel:   DMA32    empty
Apr 30 00:43:38.228891 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 30 00:43:38.228909 kernel: Movable zone start for each node
Apr 30 00:43:38.228927 kernel: Early memory node ranges
Apr 30 00:43:38.228945 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 30 00:43:38.228962 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 30 00:43:38.228980 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Apr 30 00:43:38.228997 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 30 00:43:38.229015 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 30 00:43:38.229033 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 30 00:43:38.229051 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 30 00:43:38.229068 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 30 00:43:38.229086 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 30 00:43:38.229108 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 30 00:43:38.229127 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:43:38.229151 kernel: psci: PSCIv1.0 detected in firmware.
Apr 30 00:43:38.229170 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:43:38.229189 kernel: psci: Trusted OS migration not required
Apr 30 00:43:38.229211 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:43:38.229229 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:43:38.229247 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:43:38.229266 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 00:43:38.229284 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:43:38.229303 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:43:38.229321 kernel: CPU features: detected: Spectre-v2
Apr 30 00:43:38.229340 kernel: CPU features: detected: Spectre-v3a
Apr 30 00:43:38.229359 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:43:38.229377 kernel: CPU features: detected: ARM erratum 1742098
Apr 30 00:43:38.229432 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 30 00:43:38.229464 kernel: alternatives: applying boot alternatives
Apr 30 00:43:38.229487 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:43:38.229508 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:43:38.229527 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:43:38.229545 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:43:38.229564 kernel: Fallback order for Node 0: 0
Apr 30 00:43:38.229582 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Apr 30 00:43:38.229600 kernel: Policy zone: Normal
Apr 30 00:43:38.229618 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:43:38.229635 kernel: software IO TLB: area num 2.
Apr 30 00:43:38.229653 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 30 00:43:38.229677 kernel: Memory: 3820152K/4030464K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210312K reserved, 0K cma-reserved)
Apr 30 00:43:38.229696 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 00:43:38.229714 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:43:38.229733 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:43:38.229752 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 00:43:38.229770 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:43:38.229789 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:43:38.229807 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:43:38.229825 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 00:43:38.229843 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:43:38.229860 kernel: GICv3: 96 SPIs implemented
Apr 30 00:43:38.229883 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:43:38.229901 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:43:38.229919 kernel: GICv3: GICv3 features: 16 PPIs
Apr 30 00:43:38.229936 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 30 00:43:38.229954 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 30 00:43:38.229973 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:43:38.229992 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:43:38.230010 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 30 00:43:38.230028 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 30 00:43:38.230046 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 30 00:43:38.230064 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:43:38.230083 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 30 00:43:38.230106 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 30 00:43:38.230124 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 30 00:43:38.230143 kernel: Console: colour dummy device 80x25
Apr 30 00:43:38.230161 kernel: printk: console [tty1] enabled
Apr 30 00:43:38.230180 kernel: ACPI: Core revision 20230628
Apr 30 00:43:38.230198 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 30 00:43:38.230217 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:43:38.230236 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:43:38.230254 kernel: landlock: Up and running.
Apr 30 00:43:38.230276 kernel: SELinux:  Initializing.
Apr 30 00:43:38.230296 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:43:38.230314 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:43:38.230333 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:43:38.230351 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:43:38.230370 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:43:38.230419 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:43:38.230444 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 30 00:43:38.230464 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 30 00:43:38.230491 kernel: Remapping and enabling EFI services.
Apr 30 00:43:38.230510 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:43:38.230529 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:43:38.230548 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 30 00:43:38.230567 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 30 00:43:38.230587 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 30 00:43:38.230605 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 00:43:38.232737 kernel: SMP: Total of 2 processors activated.
Apr 30 00:43:38.232759 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:43:38.232789 kernel: CPU features: detected: 32-bit EL1 Support
Apr 30 00:43:38.232808 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:43:38.232828 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:43:38.232859 kernel: alternatives: applying system-wide alternatives
Apr 30 00:43:38.232883 kernel: devtmpfs: initialized
Apr 30 00:43:38.232903 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:43:38.232922 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 00:43:38.232942 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:43:38.232961 kernel: SMBIOS 3.0.0 present.
Apr 30 00:43:38.232981 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 30 00:43:38.233006 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:43:38.233026 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:43:38.233046 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:43:38.233065 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:43:38.233085 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:43:38.233104 kernel: audit: type=2000 audit(0.291:1): state=initialized audit_enabled=0 res=1
Apr 30 00:43:38.233124 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:43:38.233148 kernel: cpuidle: using governor menu
Apr 30 00:43:38.233168 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:43:38.233227 kernel: ASID allocator initialised with 65536 entries
Apr 30 00:43:38.233252 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:43:38.233271 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:43:38.233295 kernel: Modules: 17504 pages in range for non-PLT usage
Apr 30 00:43:38.233340 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:43:38.233421 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:43:38.233448 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:43:38.233477 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:43:38.233496 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:43:38.233516 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:43:38.233535 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:43:38.233556 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:43:38.233575 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:43:38.233594 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:43:38.233613 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:43:38.233632 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:43:38.233656 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:43:38.233675 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:43:38.233695 kernel: ACPI: Interpreter enabled
Apr 30 00:43:38.233714 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:43:38.233733 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:43:38.233753 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Apr 30 00:43:38.234233 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:43:38.234545 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:43:38.234777 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:43:38.234987 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Apr 30 00:43:38.235210 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Apr 30 00:43:38.235236 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 30 00:43:38.235276 kernel: acpiphp: Slot [1] registered
Apr 30 00:43:38.235298 kernel: acpiphp: Slot [2] registered
Apr 30 00:43:38.235317 kernel: acpiphp: Slot [3] registered
Apr 30 00:43:38.235336 kernel: acpiphp: Slot [4] registered
Apr 30 00:43:38.235362 kernel: acpiphp: Slot [5] registered
Apr 30 00:43:38.235410 kernel: acpiphp: Slot [6] registered
Apr 30 00:43:38.235445 kernel: acpiphp: Slot [7] registered
Apr 30 00:43:38.235466 kernel: acpiphp: Slot [8] registered
Apr 30 00:43:38.235486 kernel: acpiphp: Slot [9] registered
Apr 30 00:43:38.235504 kernel: acpiphp: Slot [10] registered
Apr 30 00:43:38.235524 kernel: acpiphp: Slot [11] registered
Apr 30 00:43:38.235542 kernel: acpiphp: Slot [12] registered
Apr 30 00:43:38.235561 kernel: acpiphp: Slot [13] registered
Apr 30 00:43:38.235580 kernel: acpiphp: Slot [14] registered
Apr 30 00:43:38.235605 kernel: acpiphp: Slot [15] registered
Apr 30 00:43:38.235624 kernel: acpiphp: Slot [16] registered
Apr 30 00:43:38.235643 kernel: acpiphp: Slot [17] registered
Apr 30 00:43:38.235662 kernel: acpiphp: Slot [18] registered
Apr 30 00:43:38.235681 kernel: acpiphp: Slot [19] registered
Apr 30 00:43:38.235700 kernel: acpiphp: Slot [20] registered
Apr 30 00:43:38.235719 kernel: acpiphp: Slot [21] registered
Apr 30 00:43:38.235738 kernel: acpiphp: Slot [22] registered
Apr 30 00:43:38.235756 kernel: acpiphp: Slot [23] registered
Apr 30 00:43:38.235779 kernel: acpiphp: Slot [24] registered
Apr 30 00:43:38.235799 kernel: acpiphp: Slot [25] registered
Apr 30 00:43:38.235817 kernel: acpiphp: Slot [26] registered
Apr 30 00:43:38.235836 kernel: acpiphp: Slot [27] registered
Apr 30 00:43:38.235855 kernel: acpiphp: Slot [28] registered
Apr 30 00:43:38.235874 kernel: acpiphp: Slot [29] registered
Apr 30 00:43:38.235893 kernel: acpiphp: Slot [30] registered
Apr 30 00:43:38.235911 kernel: acpiphp: Slot [31] registered
Apr 30 00:43:38.235931 kernel: PCI host bridge to bus 0000:00
Apr 30 00:43:38.236199 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 30 00:43:38.237251 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:43:38.237553 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 30 00:43:38.237753 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Apr 30 00:43:38.238069 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 30 00:43:38.240855 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 30 00:43:38.241108 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 30 00:43:38.241356 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 30 00:43:38.241693 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 30 00:43:38.242021 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 30 00:43:38.242254 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 30 00:43:38.242616 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 30 00:43:38.242861 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 30 00:43:38.243104 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 30 00:43:38.243354 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 30 00:43:38.243625 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Apr 30 00:43:38.243840 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Apr 30 00:43:38.244053 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Apr 30 00:43:38.244269 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Apr 30 00:43:38.244582 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Apr 30 00:43:38.244814 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 30 00:43:38.245023 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:43:38.245218 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 30 00:43:38.245245 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:43:38.245265 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:43:38.245285 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:43:38.245305 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:43:38.245324 kernel: iommu: Default domain type: Translated
Apr 30 00:43:38.245344 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:43:38.245373 kernel: efivars: Registered efivars operations
Apr 30 00:43:38.246594 kernel: vgaarb: loaded
Apr 30 00:43:38.246627 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:43:38.246647 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:43:38.246666 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:43:38.246686 kernel: pnp: PnP ACPI init
Apr 30 00:43:38.246983 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 30 00:43:38.247013 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:43:38.247043 kernel: NET: Registered PF_INET protocol family
Apr 30 00:43:38.247063 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:43:38.247083 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:43:38.247102 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:43:38.247121 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:43:38.247141 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:43:38.247160 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:43:38.247179 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:43:38.247198 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:43:38.247222 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:43:38.247242 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:43:38.247280 kernel: kvm [1]: HYP mode not available
Apr 30 00:43:38.247301 kernel: Initialise system trusted keyrings
Apr 30 00:43:38.247321 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:43:38.247341 kernel: Key type asymmetric registered
Apr 30 00:43:38.247360 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:43:38.247380 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:43:38.247441 kernel: io scheduler mq-deadline registered
Apr 30 00:43:38.248301 kernel: io scheduler kyber registered
Apr 30 00:43:38.248324 kernel: io scheduler bfq registered
Apr 30 00:43:38.252759 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 30 00:43:38.252824 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 00:43:38.252847 kernel: ACPI: button: Power Button [PWRB]
Apr 30 00:43:38.252868 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 30 00:43:38.252888 kernel: ACPI: button: Sleep Button [SLPB]
Apr 30 00:43:38.252908 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:43:38.252945 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 00:43:38.253206 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 30 00:43:38.253239 kernel: printk: console [ttyS0] disabled
Apr 30 00:43:38.253260 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 30 00:43:38.253279 kernel: printk: console [ttyS0] enabled
Apr 30 00:43:38.253298 kernel: printk: bootconsole [uart0] disabled
Apr 30 00:43:38.253317 kernel: thunder_xcv, ver 1.0
Apr 30 00:43:38.253336 kernel: thunder_bgx, ver 1.0
Apr 30 00:43:38.253355 kernel: nicpf, ver 1.0
Apr 30 00:43:38.253382 kernel: nicvf, ver 1.0
Apr 30 00:43:38.253683 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:43:38.253884 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:43:37 UTC (1745973817)
Apr 30 00:43:38.253912 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:43:38.253932 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 30 00:43:38.253951 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:43:38.253971 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:43:38.253990 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:43:38.254016 kernel: Segment Routing with IPv6
Apr 30 00:43:38.254035 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:43:38.254054 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:43:38.254074 kernel: Key type dns_resolver registered
Apr 30 00:43:38.254093 kernel: registered taskstats version 1
Apr 30 00:43:38.254112 kernel: Loading compiled-in X.509 certificates
Apr 30 00:43:38.254132 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 00:43:38.254151 kernel: Key type .fscrypt registered
Apr 30 00:43:38.254169 kernel: Key type fscrypt-provisioning registered
Apr 30 00:43:38.254192 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:43:38.254212 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:43:38.254231 kernel: ima: No architecture policies found
Apr 30 00:43:38.254251 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:43:38.254270 kernel: clk: Disabling unused clocks
Apr 30 00:43:38.254288 kernel: Freeing unused kernel memory: 39424K
Apr 30 00:43:38.254308 kernel: Run /init as init process
Apr 30 00:43:38.254327 kernel:   with arguments:
Apr 30 00:43:38.254346 kernel:     /init
Apr 30 00:43:38.254364 kernel:   with environment:
Apr 30 00:43:38.254447 kernel:     HOME=/
Apr 30 00:43:38.254475 kernel:     TERM=linux
Apr 30 00:43:38.254496 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:43:38.254521 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:43:38.254548 systemd[1]: Detected virtualization amazon.
Apr 30 00:43:38.254570 systemd[1]: Detected architecture arm64.
Apr 30 00:43:38.254591 systemd[1]: Running in initrd.
Apr 30 00:43:38.254619 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:43:38.254641 systemd[1]: Hostname set to .
Apr 30 00:43:38.254663 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:43:38.254685 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:43:38.254707 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:43:38.254729 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:43:38.254752 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:43:38.254774 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:43:38.254802 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:43:38.254824 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:43:38.254849 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:43:38.254871 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:43:38.254893 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:43:38.254914 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:43:38.254936 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:43:38.254963 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:43:38.254986 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:43:38.255008 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:43:38.255031 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:43:38.255053 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:43:38.255075 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:43:38.255097 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:43:38.255118 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:43:38.255139 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:43:38.255167 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:43:38.255189 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:43:38.255211 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:43:38.255232 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:43:38.255277 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:43:38.255303 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:43:38.255326 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:43:38.255347 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:43:38.255377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:43:38.256604 systemd-journald[251]: Collecting audit messages is disabled.
Apr 30 00:43:38.256672 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:43:38.256695 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:43:38.256727 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:43:38.256753 systemd-journald[251]: Journal started
Apr 30 00:43:38.256796 systemd-journald[251]: Runtime Journal (/run/log/journal/ec21f717535a58ed97e708389061cfc9) is 8.0M, max 75.3M, 67.3M free.
Apr 30 00:43:38.259657 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:43:38.236182 systemd-modules-load[252]: Inserted module 'overlay'
Apr 30 00:43:38.277500 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:43:38.277599 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:43:38.282659 kernel: Bridge firewalling registered
Apr 30 00:43:38.282816 systemd-modules-load[252]: Inserted module 'br_netfilter'
Apr 30 00:43:38.287212 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:43:38.296265 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:43:38.312848 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:43:38.327783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:43:38.334834 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:43:38.353212 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:43:38.374950 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:43:38.391997 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:43:38.397263 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:43:38.413052 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:43:38.419126 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:43:38.434699 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:43:38.443075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:43:38.463692 dracut-cmdline[284]: dracut-dracut-053
Apr 30 00:43:38.471821 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:43:38.531582 systemd-resolved[287]: Positive Trust Anchors:
Apr 30 00:43:38.531620 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:43:38.531683 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:43:38.626420 kernel: SCSI subsystem initialized
Apr 30 00:43:38.632429 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:43:38.645431 kernel: iscsi: registered transport (tcp)
Apr 30 00:43:38.668425 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:43:38.668501 kernel: QLogic iSCSI HBA Driver
Apr 30 00:43:38.764435 kernel: random: crng init done
Apr 30 00:43:38.764733 systemd-resolved[287]: Defaulting to hostname 'linux'.
Apr 30 00:43:38.768384 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:43:38.772803 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:43:38.795627 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:43:38.805721 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:43:38.851459 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:43:38.851534 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:43:38.853320 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:43:38.920500 kernel: raid6: neonx8 gen() 6688 MB/s
Apr 30 00:43:38.937444 kernel: raid6: neonx4 gen() 6539 MB/s
Apr 30 00:43:38.954457 kernel: raid6: neonx2 gen() 5436 MB/s
Apr 30 00:43:38.971441 kernel: raid6: neonx1 gen() 3933 MB/s
Apr 30 00:43:38.988435 kernel: raid6: int64x8 gen() 3776 MB/s
Apr 30 00:43:39.005440 kernel: raid6: int64x4 gen() 3681 MB/s
Apr 30 00:43:39.022429 kernel: raid6: int64x2 gen() 3607 MB/s
Apr 30 00:43:39.040326 kernel: raid6: int64x1 gen() 2749 MB/s
Apr 30 00:43:39.040438 kernel: raid6: using algorithm neonx8 gen() 6688 MB/s
Apr 30 00:43:39.058296 kernel: raid6: .... xor() 4836 MB/s, rmw enabled
Apr 30 00:43:39.058369 kernel: raid6: using neon recovery algorithm
Apr 30 00:43:39.067649 kernel: xor: measuring software checksum speed
Apr 30 00:43:39.067733 kernel: 8regs : 10921 MB/sec
Apr 30 00:43:39.068771 kernel: 32regs : 11945 MB/sec
Apr 30 00:43:39.069975 kernel: arm64_neon : 9569 MB/sec
Apr 30 00:43:39.070007 kernel: xor: using function: 32regs (11945 MB/sec)
Apr 30 00:43:39.164456 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:43:39.189471 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:43:39.210718 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:43:39.257861 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Apr 30 00:43:39.268896 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:43:39.286782 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:43:39.330505 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Apr 30 00:43:39.405175 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:43:39.413766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:43:39.547952 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:43:39.561733 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:43:39.611852 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:43:39.622105 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:43:39.624699 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:43:39.643450 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:43:39.654899 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:43:39.697882 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:43:39.775474 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 00:43:39.775572 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 30 00:43:39.790597 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 30 00:43:39.790863 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 30 00:43:39.791110 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:40:1e:f3:24:f1
Apr 30 00:43:39.795215 (udev-worker)[525]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 00:43:39.806084 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:43:39.808356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:43:39.815925 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:43:39.818362 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:43:39.821158 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:43:39.825686 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:43:39.839895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:43:39.850434 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 30 00:43:39.853427 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 30 00:43:39.862076 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 30 00:43:39.874651 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:43:39.874737 kernel: GPT:9289727 != 16777215
Apr 30 00:43:39.874764 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:43:39.874789 kernel: GPT:9289727 != 16777215
Apr 30 00:43:39.874814 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:43:39.874838 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:43:39.889635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:43:39.902241 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:43:39.966444 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (517)
Apr 30 00:43:39.966917 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:43:40.006476 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (515)
Apr 30 00:43:40.072916 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 30 00:43:40.134697 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 30 00:43:40.141716 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 30 00:43:40.158540 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 30 00:43:40.177481 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 30 00:43:40.191646 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:43:40.207997 disk-uuid[661]: Primary Header is updated.
Apr 30 00:43:40.207997 disk-uuid[661]: Secondary Entries is updated.
Apr 30 00:43:40.207997 disk-uuid[661]: Secondary Header is updated.
Apr 30 00:43:40.218449 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:43:40.229438 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:43:40.235441 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:43:41.241433 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 30 00:43:41.241514 disk-uuid[662]: The operation has completed successfully.
Apr 30 00:43:41.478932 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:43:41.481292 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:43:41.540749 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:43:41.549429 sh[1007]: Success
Apr 30 00:43:41.576789 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 00:43:41.723304 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:43:41.732625 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:43:41.738063 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:43:41.795261 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4
Apr 30 00:43:41.795382 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:43:41.795445 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:43:41.798432 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:43:41.798528 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:43:41.829435 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 30 00:43:41.832928 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:43:41.837563 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:43:41.856881 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:43:41.864843 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:43:41.897204 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:43:41.897301 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:43:41.898660 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 00:43:41.908460 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 00:43:41.931300 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:43:41.934883 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:43:41.947276 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:43:41.957867 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:43:42.115274 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:43:42.138821 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:43:42.187093 ignition[1104]: Ignition 2.19.0
Apr 30 00:43:42.187126 ignition[1104]: Stage: fetch-offline
Apr 30 00:43:42.188181 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:42.188210 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:42.189419 ignition[1104]: Ignition finished successfully
Apr 30 00:43:42.199763 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:43:42.228745 systemd-networkd[1201]: lo: Link UP
Apr 30 00:43:42.229294 systemd-networkd[1201]: lo: Gained carrier
Apr 30 00:43:42.234553 systemd-networkd[1201]: Enumeration completed
Apr 30 00:43:42.236367 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:43:42.237708 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:43:42.237717 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:43:42.246069 systemd[1]: Reached target network.target - Network.
Apr 30 00:43:42.248428 systemd-networkd[1201]: eth0: Link UP
Apr 30 00:43:42.248437 systemd-networkd[1201]: eth0: Gained carrier
Apr 30 00:43:42.248457 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:43:42.284844 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 30 00:43:42.284997 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.24.0/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 30 00:43:42.318072 ignition[1209]: Ignition 2.19.0
Apr 30 00:43:42.318644 ignition[1209]: Stage: fetch
Apr 30 00:43:42.321648 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:42.321683 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:42.321862 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:42.334349 ignition[1209]: PUT result: OK
Apr 30 00:43:42.337561 ignition[1209]: parsed url from cmdline: ""
Apr 30 00:43:42.337580 ignition[1209]: no config URL provided
Apr 30 00:43:42.337596 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:43:42.337622 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:43:42.337656 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:42.341762 ignition[1209]: PUT result: OK
Apr 30 00:43:42.341885 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 30 00:43:42.344320 ignition[1209]: GET result: OK
Apr 30 00:43:42.357420 unknown[1209]: fetched base config from "system"
Apr 30 00:43:42.345824 ignition[1209]: parsing config with SHA512: 10764284b8eec359da86a113f98539b6a7fa1c0dc0724c3c27110c2b2d9a809dbd8fc509a61980c772db5e5e18b48e6ea1a91dd75c40dda1a359f414ea872049
Apr 30 00:43:42.357440 unknown[1209]: fetched base config from "system"
Apr 30 00:43:42.358515 ignition[1209]: fetch: fetch complete
Apr 30 00:43:42.357455 unknown[1209]: fetched user config from "aws"
Apr 30 00:43:42.358529 ignition[1209]: fetch: fetch passed
Apr 30 00:43:42.358644 ignition[1209]: Ignition finished successfully
Apr 30 00:43:42.371509 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 30 00:43:42.383819 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:43:42.426261 ignition[1216]: Ignition 2.19.0
Apr 30 00:43:42.426846 ignition[1216]: Stage: kargs
Apr 30 00:43:42.427791 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:42.427830 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:42.428078 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:42.430866 ignition[1216]: PUT result: OK
Apr 30 00:43:42.442807 ignition[1216]: kargs: kargs passed
Apr 30 00:43:42.443019 ignition[1216]: Ignition finished successfully
Apr 30 00:43:42.448783 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:43:42.460769 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:43:42.493467 ignition[1223]: Ignition 2.19.0
Apr 30 00:43:42.494101 ignition[1223]: Stage: disks
Apr 30 00:43:42.494932 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:42.494962 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:42.495143 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:42.499691 ignition[1223]: PUT result: OK
Apr 30 00:43:42.509245 ignition[1223]: disks: disks passed
Apr 30 00:43:42.510869 ignition[1223]: Ignition finished successfully
Apr 30 00:43:42.515464 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:43:42.518881 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:43:42.524295 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:43:42.526739 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:43:42.528786 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:43:42.530863 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:43:42.543766 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:43:42.608259 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 00:43:42.617714 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:43:42.627651 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:43:42.726438 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none.
Apr 30 00:43:42.727727 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:43:42.731696 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:43:42.748617 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:43:42.762883 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:43:42.768044 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 00:43:42.770695 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:43:42.770754 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:43:42.787277 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:43:42.802476 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1250)
Apr 30 00:43:42.802694 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:43:42.810558 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:43:42.811552 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:43:42.812944 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 00:43:42.827467 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 00:43:42.832575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:43:42.919579 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:43:42.931091 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:43:42.941476 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:43:42.952562 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:43:43.150505 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:43:43.172824 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:43:43.180216 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:43:43.201458 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:43:43.204618 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:43:43.257978 ignition[1362]: INFO : Ignition 2.19.0
Apr 30 00:43:43.257978 ignition[1362]: INFO : Stage: mount
Apr 30 00:43:43.262749 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:43.262749 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:43.262749 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:43.262749 ignition[1362]: INFO : PUT result: OK
Apr 30 00:43:43.265842 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:43:43.276858 ignition[1362]: INFO : mount: mount passed
Apr 30 00:43:43.278577 ignition[1362]: INFO : Ignition finished successfully
Apr 30 00:43:43.283350 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:43:43.296653 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:43:43.319987 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:43:43.357983 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1375)
Apr 30 00:43:43.358059 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:43:43.359847 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:43:43.361199 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 30 00:43:43.366444 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 30 00:43:43.372656 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:43:43.425570 ignition[1392]: INFO : Ignition 2.19.0
Apr 30 00:43:43.425570 ignition[1392]: INFO : Stage: files
Apr 30 00:43:43.429206 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:43.429206 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:43.433911 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:43.437203 ignition[1392]: INFO : PUT result: OK
Apr 30 00:43:43.442332 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:43:43.446090 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:43:43.446090 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:43:43.460165 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:43:43.463313 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:43:43.466556 unknown[1392]: wrote ssh authorized keys file for user: core
Apr 30 00:43:43.470283 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:43:43.474404 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 00:43:43.474404 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Apr 30 00:43:43.782792 systemd-networkd[1201]: eth0: Gained IPv6LL
Apr 30 00:43:44.391996 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 30 00:43:48.403338 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 00:43:48.407867 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:43:48.407867 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 30 00:43:48.861920 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:43:48.990615 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:43:48.990615 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:43:48.998956 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Apr 30 00:43:49.451721 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 30 00:43:49.772587 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:43:49.772587 ignition[1392]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 30 00:43:49.779580 ignition[1392]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:43:49.779580 ignition[1392]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:43:49.779580 ignition[1392]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 30 00:43:49.779580 ignition[1392]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:43:49.779580 ignition[1392]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:43:49.779580 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:43:49.779580 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:43:49.779580 ignition[1392]: INFO : files: files passed
Apr 30 00:43:49.779580 ignition[1392]: INFO : Ignition finished successfully
Apr 30 00:43:49.786507 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:43:49.812942 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:43:49.834736 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:43:49.837671 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:43:49.837863 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:43:49.870275 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:43:49.874021 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:43:49.877458 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:43:49.882654 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:43:49.888868 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:43:49.899788 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:43:49.969341 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:43:49.969628 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:43:49.974041 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:43:49.976297 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:43:49.980163 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:43:49.998845 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:43:50.030503 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:43:50.053876 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:43:50.078159 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:43:50.080894 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:43:50.085295 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:43:50.090707 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:43:50.090955 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:43:50.098019 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:43:50.101227 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:43:50.107000 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:43:50.110473 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:43:50.114363 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:43:50.120501 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:43:50.122932 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:43:50.127021 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:43:50.129620 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:43:50.135503 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:43:50.137815 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:43:50.138054 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:43:50.147329 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:43:50.149512 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:43:50.151897 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:43:50.156186 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:43:50.158698 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:43:50.158947 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:43:50.163298 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:43:50.163581 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:43:50.168655 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:43:50.168857 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:43:50.192829 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:43:50.199679 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:43:50.200650 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:43:50.217065 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:43:50.218953 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:43:50.221508 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:43:50.224544 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:43:50.226509 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:43:50.244190 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:43:50.247762 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:43:50.256765 ignition[1444]: INFO : Ignition 2.19.0
Apr 30 00:43:50.256765 ignition[1444]: INFO : Stage: umount
Apr 30 00:43:50.256765 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:43:50.256765 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 30 00:43:50.256765 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 30 00:43:50.269153 ignition[1444]: INFO : PUT result: OK
Apr 30 00:43:50.273911 ignition[1444]: INFO : umount: umount passed
Apr 30 00:43:50.273911 ignition[1444]: INFO : Ignition finished successfully
Apr 30 00:43:50.280090 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:43:50.280681 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:43:50.285471 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:43:50.285609 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:43:50.292307 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:43:50.292522 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:43:50.299800 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 30 00:43:50.299914 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 30 00:43:50.303792 systemd[1]: Stopped target network.target - Network.
Apr 30 00:43:50.307630 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:43:50.307738 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:43:50.312825 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:43:50.319325 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:43:50.325172 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:43:50.345329 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:43:50.347104 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:43:50.349018 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:43:50.349105 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:43:50.351023 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:43:50.351094 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:43:50.353075 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:43:50.353173 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:43:50.355509 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:43:50.355608 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:43:50.359035 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:43:50.361297 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:43:50.369148 systemd-networkd[1201]: eth0: DHCPv6 lease lost
Apr 30 00:43:50.373809 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:43:50.376620 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:43:50.378644 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:43:50.383749 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:43:50.384028 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:43:50.406483 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:43:50.406941 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:43:50.418155 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:43:50.420108 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:43:50.424120 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:43:50.424236 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:43:50.435360 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:43:50.443423 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:43:50.443557 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:43:50.446106 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:43:50.446191 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:43:50.448896 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:43:50.448978 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:43:50.451892 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:43:50.452007 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:43:50.457966 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:43:50.476957 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:43:50.477269 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:43:50.480563 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:43:50.480654 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:43:50.481150 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:43:50.481216 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:43:50.481443 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:43:50.481525 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:43:50.482104 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:43:50.482182 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:43:50.482998 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:43:50.483076 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:43:50.499899 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:43:50.517663 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:43:50.517808 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:43:50.525214 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:43:50.525325 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:43:50.539889 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:43:50.540916 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:43:50.583960 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:43:50.584451 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:43:50.591221 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:43:50.603014 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:43:50.632136 systemd[1]: Switching root.
Apr 30 00:43:50.672021 systemd-journald[251]: Journal stopped
Apr 30 00:43:52.739901 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:43:52.740062 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:43:52.740116 kernel: SELinux: policy capability open_perms=1
Apr 30 00:43:52.740150 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:43:52.740185 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:43:52.740225 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:43:52.740260 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:43:52.740293 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:43:52.740325 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:43:52.740357 kernel: audit: type=1403 audit(1745973831.021:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:43:52.740441 systemd[1]: Successfully loaded SELinux policy in 53.133ms.
Apr 30 00:43:52.740527 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.359ms.
Apr 30 00:43:52.752022 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:43:52.752069 systemd[1]: Detected virtualization amazon.
Apr 30 00:43:52.752119 systemd[1]: Detected architecture arm64.
Apr 30 00:43:52.752159 systemd[1]: Detected first boot.
Apr 30 00:43:52.752194 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:43:52.752230 zram_generator::config[1487]: No configuration found.
Apr 30 00:43:52.752273 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:43:52.752307 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 30 00:43:52.752345 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 30 00:43:52.752381 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:43:52.755549 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:43:52.755595 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:43:52.755627 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:43:52.755662 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:43:52.755697 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:43:52.755731 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:43:52.755762 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:43:52.755793 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:43:52.755830 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:43:52.755872 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:43:52.755907 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:43:52.755941 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:43:52.755974 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:43:52.756072 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:43:52.756116 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 30 00:43:52.756150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:43:52.756187 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 30 00:43:52.756222 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 30 00:43:52.756275 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:43:52.756309 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:43:52.756350 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:43:52.759909 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:43:52.760009 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:43:52.760047 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:43:52.760083 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:43:52.760126 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:43:52.760161 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:43:52.760195 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:43:52.760229 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:43:52.760261 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:43:52.760292 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:43:52.760324 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:43:52.760362 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:43:52.770560 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:43:52.770641 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:43:52.770681 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:43:52.770717 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:43:52.770751 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:43:52.770782 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:43:52.770816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:43:52.770854 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:43:52.770890 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:43:52.770921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:43:52.770961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:43:52.770992 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:43:52.771022 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:43:52.771052 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:43:52.771085 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:43:52.771117 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 30 00:43:52.771151 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 30 00:43:52.771208 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 30 00:43:52.771272 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 30 00:43:52.771316 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:43:52.771354 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:43:52.771460 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:43:52.771506 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:43:52.771544 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:43:52.771577 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 30 00:43:52.771610 systemd[1]: Stopped verity-setup.service.
Apr 30 00:43:52.771643 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:43:52.773678 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:43:52.773741 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:43:52.773772 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:43:52.773802 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:43:52.773832 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:43:52.773863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:43:52.773908 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:43:52.773938 kernel: loop: module loaded
Apr 30 00:43:52.773967 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:43:52.774005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:43:52.774037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:43:52.774067 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:43:52.774099 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:43:52.774129 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:43:52.774166 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:43:52.774198 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:43:52.774243 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:43:52.774328 systemd-journald[1569]: Collecting audit messages is disabled.
Apr 30 00:43:52.781554 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:43:52.781662 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:43:52.781708 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:43:52.781742 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:43:52.781777 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:43:52.781808 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:43:52.781841 kernel: fuse: init (API version 7.39)
Apr 30 00:43:52.781876 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:43:52.781907 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:43:52.781941 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:43:52.781975 systemd-journald[1569]: Journal started
Apr 30 00:43:52.782037 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec21f717535a58ed97e708389061cfc9) is 8.0M, max 75.3M, 67.3M free.
Apr 30 00:43:52.114213 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:43:52.792839 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:43:52.140883 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 30 00:43:52.141820 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 30 00:43:52.814426 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:43:52.814531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:43:52.827428 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:43:52.827538 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:43:52.840628 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:43:52.848960 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:43:52.855548 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:43:52.857592 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:43:52.859517 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:43:52.864505 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:43:52.935028 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:43:52.944835 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:43:52.968042 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:43:52.983971 kernel: ACPI: bus type drm_connector registered
Apr 30 00:43:52.994517 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:43:52.997955 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:43:53.001553 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:43:53.017854 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:43:53.026583 kernel: loop0: detected capacity change from 0 to 52536
Apr 30 00:43:53.042951 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:43:53.046602 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:43:53.052031 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:43:53.065986 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:43:53.085256 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec21f717535a58ed97e708389061cfc9 is 35.280ms for 915 entries.
Apr 30 00:43:53.085256 systemd-journald[1569]: System Journal (/var/log/journal/ec21f717535a58ed97e708389061cfc9) is 8.0M, max 195.6M, 187.6M free.
Apr 30 00:43:53.135190 systemd-journald[1569]: Received client request to flush runtime journal.
Apr 30 00:43:53.139093 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:43:53.162350 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:43:53.173075 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:43:53.188360 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:43:53.207923 kernel: loop1: detected capacity change from 0 to 194096
Apr 30 00:43:53.257156 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:43:53.273687 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:43:53.280495 kernel: loop2: detected capacity change from 0 to 114432
Apr 30 00:43:53.359481 kernel: loop3: detected capacity change from 0 to 114328
Apr 30 00:43:53.377156 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Apr 30 00:43:53.377190 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Apr 30 00:43:53.405865 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:43:53.409933 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:43:53.424814 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:43:53.440787 kernel: loop4: detected capacity change from 0 to 52536
Apr 30 00:43:53.481381 kernel: loop5: detected capacity change from 0 to 194096
Apr 30 00:43:53.496709 udevadm[1641]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 30 00:43:53.537897 kernel: loop6: detected capacity change from 0 to 114432
Apr 30 00:43:53.574434 kernel: loop7: detected capacity change from 0 to 114328
Apr 30 00:43:53.607068 (sd-merge)[1642]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 30 00:43:53.611358 (sd-merge)[1642]: Merged extensions into '/usr'.
Apr 30 00:43:53.623858 systemd[1]: Reloading requested from client PID 1594 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:43:53.623893 systemd[1]: Reloading...
Apr 30 00:43:53.823428 zram_generator::config[1669]: No configuration found.
Apr 30 00:43:53.922522 ldconfig[1587]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:43:54.152506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:43:54.276916 systemd[1]: Reloading finished in 651 ms.
Apr 30 00:43:54.326871 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:43:54.330843 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:43:54.356574 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:43:54.361634 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:43:54.388673 systemd[1]: Reloading requested from client PID 1721 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:43:54.388718 systemd[1]: Reloading...
Apr 30 00:43:54.446939 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:43:54.449338 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:43:54.453299 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:43:54.454066 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Apr 30 00:43:54.454304 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Apr 30 00:43:54.463897 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:43:54.464193 systemd-tmpfiles[1722]: Skipping /boot
Apr 30 00:43:54.487934 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:43:54.488123 systemd-tmpfiles[1722]: Skipping /boot
Apr 30 00:43:54.550484 zram_generator::config[1750]: No configuration found.
Apr 30 00:43:54.804976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:43:54.922737 systemd[1]: Reloading finished in 533 ms.
Apr 30 00:43:54.953218 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:43:54.961737 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:43:54.986750 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:43:54.992760 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:43:55.003842 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:43:55.015846 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:43:55.024838 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:43:55.033127 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:43:55.048297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:43:55.063120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:43:55.075137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:43:55.083320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:43:55.086833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:43:55.096739 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:43:55.097312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:43:55.108058 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:43:55.124563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:43:55.128609 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:43:55.129258 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:43:55.142864 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:43:55.174142 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:43:55.216187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:43:55.216608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:43:55.228468 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:43:55.250950 systemd-udevd[1808]: Using default interface naming scheme 'v255'.
Apr 30 00:43:55.252831 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:43:55.253898 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:43:55.257662 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:43:55.272929 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:43:55.280046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:43:55.280506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:43:55.283731 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:43:55.289325 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:43:55.289783 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:43:55.292754 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:43:55.327266 augenrules[1837]: No rules
Apr 30 00:43:55.331302 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:43:55.344724 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:43:55.369908 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:43:55.384720 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:43:55.388576 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:43:55.393549 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:43:55.408796 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:43:55.612483 systemd-networkd[1846]: lo: Link UP
Apr 30 00:43:55.612503 systemd-networkd[1846]: lo: Gained carrier
Apr 30 00:43:55.613991 systemd-networkd[1846]: Enumeration completed
Apr 30 00:43:55.614171 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:43:55.634268 (udev-worker)[1861]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 00:43:55.655823 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:43:55.660829 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 30 00:43:55.680102 systemd-resolved[1807]: Positive Trust Anchors:
Apr 30 00:43:55.680157 systemd-resolved[1807]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:43:55.680222 systemd-resolved[1807]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:43:55.691572 systemd-resolved[1807]: Defaulting to hostname 'linux'.
Apr 30 00:43:55.694858 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:43:55.697321 systemd[1]: Reached target network.target - Network.
Apr 30 00:43:55.699234 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:43:55.779234 systemd-networkd[1846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:43:55.779254 systemd-networkd[1846]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:43:55.784494 systemd-networkd[1846]: eth0: Link UP
Apr 30 00:43:55.784851 systemd-networkd[1846]: eth0: Gained carrier
Apr 30 00:43:55.784896 systemd-networkd[1846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:43:55.798609 systemd-networkd[1846]: eth0: DHCPv4 address 172.31.24.0/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 00:43:55.916465 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1861) Apr 30 00:43:55.959204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:43:56.172515 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:43:56.191188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 30 00:43:56.195534 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 00:43:56.204748 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:43:56.213910 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:43:56.233808 lvm[1974]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:43:56.256300 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:43:56.278600 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 00:43:56.281983 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:43:56.284942 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:43:56.287423 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 00:43:56.290037 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 30 00:43:56.293170 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 00:43:56.295902 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Apr 30 00:43:56.298369 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 00:43:56.300908 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 00:43:56.300965 systemd[1]: Reached target paths.target - Path Units. Apr 30 00:43:56.302805 systemd[1]: Reached target timers.target - Timer Units. Apr 30 00:43:56.308496 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 00:43:56.313903 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 00:43:56.334141 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 00:43:56.342795 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 00:43:56.346577 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 00:43:56.349257 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 00:43:56.351631 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:43:56.353695 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:43:56.353745 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 00:43:56.359653 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 00:43:56.365779 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 00:43:56.374786 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 00:43:56.383440 lvm[1981]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:43:56.387762 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 00:43:56.404881 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 30 00:43:56.407157 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 00:43:56.428888 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 00:43:56.438891 systemd[1]: Started ntpd.service - Network Time Service. Apr 30 00:43:56.461922 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 30 00:43:56.468667 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 30 00:43:56.476756 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 00:43:56.490561 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 00:43:56.499826 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 00:43:56.503894 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 00:43:56.505102 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 30 00:43:56.511934 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 00:43:56.517522 jq[1985]: false Apr 30 00:43:56.517941 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 00:43:56.531793 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 00:43:56.532321 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 30 00:43:56.569568 dbus-daemon[1984]: [system] SELinux support is enabled Apr 30 00:43:56.578193 dbus-daemon[1984]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1846 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 30 00:43:56.580086 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 00:43:56.588521 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 00:43:56.592157 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 00:43:56.592235 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 00:43:56.594874 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 00:43:56.594915 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 30 00:43:56.606094 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 00:43:56.615056 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 00:43:56.616576 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 00:43:56.631748 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 30 00:43:56.654061 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 00:43:56.655710 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 30 00:43:56.697578 extend-filesystems[1986]: Found loop4 Apr 30 00:43:56.697578 extend-filesystems[1986]: Found loop5 Apr 30 00:43:56.697578 extend-filesystems[1986]: Found loop6 Apr 30 00:43:56.697578 extend-filesystems[1986]: Found loop7 Apr 30 00:43:56.697578 extend-filesystems[1986]: Found nvme0n1 Apr 30 00:43:56.697578 extend-filesystems[1986]: Found nvme0n1p1 Apr 30 00:43:56.697578 extend-filesystems[1986]: Found nvme0n1p2 Apr 30 00:43:56.697578 extend-filesystems[1986]: Found nvme0n1p3 Apr 30 00:43:56.697578 extend-filesystems[1986]: Found usr Apr 30 00:43:56.727948 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:34 UTC 2025 (1): Starting Apr 30 00:43:56.727948 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 00:43:56.727948 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: ---------------------------------------------------- Apr 30 00:43:56.727948 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: ntp-4 is maintained by Network Time Foundation, Apr 30 00:43:56.727948 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 00:43:56.727948 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: corporation. 
Support and training for ntp-4 are Apr 30 00:43:56.727948 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: available at https://www.nwtime.org/support Apr 30 00:43:56.727948 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: ---------------------------------------------------- Apr 30 00:43:56.706531 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:34 UTC 2025 (1): Starting Apr 30 00:43:56.742834 extend-filesystems[1986]: Found nvme0n1p4 Apr 30 00:43:56.742834 extend-filesystems[1986]: Found nvme0n1p6 Apr 30 00:43:56.742834 extend-filesystems[1986]: Found nvme0n1p7 Apr 30 00:43:56.742834 extend-filesystems[1986]: Found nvme0n1p9 Apr 30 00:43:56.742834 extend-filesystems[1986]: Checking size of /dev/nvme0n1p9 Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: proto: precision = 0.108 usec (-23) Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: basedate set to 2025-04-17 Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: gps base set to 2025-04-20 (week 2363) Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: Listen normally on 3 eth0 172.31.24.0:123 Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: Listen normally on 4 lo [::1]:123 Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: bind(21) AF_INET6 fe80::440:1eff:fef3:24f1%2#123 flags 0x11 failed: Cannot assign requested address Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: unable to create socket on eth0 (5) for fe80::440:1eff:fef3:24f1%2#123 Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: failed to init interface for address fe80::440:1eff:fef3:24f1%2 Apr 30 00:43:56.763303 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: 
Listening on routing socket on fd #21 for interface updates Apr 30 00:43:56.706581 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 30 00:43:56.764072 jq[1997]: true Apr 30 00:43:56.784109 tar[2002]: linux-arm64/helm Apr 30 00:43:56.744294 (ntainerd)[2021]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 00:43:56.706602 ntpd[1988]: ---------------------------------------------------- Apr 30 00:43:56.785211 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 00:43:56.785211 ntpd[1988]: 30 Apr 00:43:56 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 00:43:56.706621 ntpd[1988]: ntp-4 is maintained by Network Time Foundation, Apr 30 00:43:56.706640 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 30 00:43:56.706658 ntpd[1988]: corporation. Support and training for ntp-4 are Apr 30 00:43:56.706677 ntpd[1988]: available at https://www.nwtime.org/support Apr 30 00:43:56.706696 ntpd[1988]: ---------------------------------------------------- Apr 30 00:43:56.730674 ntpd[1988]: proto: precision = 0.108 usec (-23) Apr 30 00:43:56.731103 ntpd[1988]: basedate set to 2025-04-17 Apr 30 00:43:56.731128 ntpd[1988]: gps base set to 2025-04-20 (week 2363) Apr 30 00:43:56.738565 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123 Apr 30 00:43:56.738659 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 30 00:43:56.752225 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123 Apr 30 00:43:56.752309 ntpd[1988]: Listen normally on 3 eth0 172.31.24.0:123 Apr 30 00:43:56.752379 ntpd[1988]: Listen normally on 4 lo [::1]:123 Apr 30 00:43:56.752502 ntpd[1988]: bind(21) AF_INET6 fe80::440:1eff:fef3:24f1%2#123 flags 0x11 failed: Cannot assign requested address Apr 30 00:43:56.752546 ntpd[1988]: unable to create socket on eth0 (5) for fe80::440:1eff:fef3:24f1%2#123 Apr 30 00:43:56.752576 ntpd[1988]: failed 
to init interface for address fe80::440:1eff:fef3:24f1%2 Apr 30 00:43:56.752636 ntpd[1988]: Listening on routing socket on fd #21 for interface updates Apr 30 00:43:56.770035 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 00:43:56.770094 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 30 00:43:56.822944 jq[2025]: true Apr 30 00:43:56.836761 update_engine[1996]: I20250430 00:43:56.827122 1996 main.cc:92] Flatcar Update Engine starting Apr 30 00:43:56.844680 update_engine[1996]: I20250430 00:43:56.840018 1996 update_check_scheduler.cc:74] Next update check in 11m45s Apr 30 00:43:56.846968 systemd[1]: Started update-engine.service - Update Engine. Apr 30 00:43:56.856471 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 00:43:56.879752 extend-filesystems[1986]: Resized partition /dev/nvme0n1p9 Apr 30 00:43:56.893969 extend-filesystems[2037]: resize2fs 1.47.1 (20-May-2024) Apr 30 00:43:56.900545 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 30 00:43:56.922728 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Apr 30 00:43:56.952542 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:43:57.004842 systemd-logind[1995]: Watching system buttons on /dev/input/event0 (Power Button) Apr 30 00:43:57.004891 systemd-logind[1995]: Watching system buttons on /dev/input/event1 (Sleep Button) Apr 30 00:43:57.005233 systemd-logind[1995]: New seat seat0. Apr 30 00:43:57.007874 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 00:43:57.071446 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Apr 30 00:43:57.102001 extend-filesystems[2037]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 30 00:43:57.102001 extend-filesystems[2037]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 30 00:43:57.102001 extend-filesystems[2037]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Apr 30 00:43:57.124703 extend-filesystems[1986]: Resized filesystem in /dev/nvme0n1p9 Apr 30 00:43:57.118820 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 00:43:57.119582 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 00:43:57.153365 bash[2065]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:43:57.158625 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 00:43:57.204451 coreos-metadata[1983]: Apr 30 00:43:57.201 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 00:43:57.254836 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1860) Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.209 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.216 INFO Fetch successful Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.216 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.217 INFO Fetch successful Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.217 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.218 INFO Fetch successful Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.218 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.219 INFO Fetch successful Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.219 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.221 INFO Fetch failed with 404: resource not found Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.221 
INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.225 INFO Fetch successful Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.225 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.231 INFO Fetch successful Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.231 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.234 INFO Fetch successful Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.234 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.236 INFO Fetch successful Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.236 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 30 00:43:57.254954 coreos-metadata[1983]: Apr 30 00:43:57.239 INFO Fetch successful Apr 30 00:43:57.245190 systemd[1]: Starting sshkeys.service... Apr 30 00:43:57.340604 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 00:43:57.357788 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 00:43:57.376528 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 00:43:57.378951 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 30 00:43:57.478582 systemd-networkd[1846]: eth0: Gained IPv6LL Apr 30 00:43:57.478855 locksmithd[2033]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 00:43:57.500972 containerd[2021]: time="2025-04-30T00:43:57.496876683Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 30 00:43:57.505518 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:43:57.510793 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 30 00:43:57.511243 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 00:43:57.523292 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2012 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 30 00:43:57.570249 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 30 00:43:57.578922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:43:57.587063 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 00:43:57.604074 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 30 00:43:57.670815 systemd[1]: Starting polkit.service - Authorization Manager... Apr 30 00:43:57.793940 polkitd[2181]: Started polkitd version 121 Apr 30 00:43:57.857085 amazon-ssm-agent[2154]: Initializing new seelog logger Apr 30 00:43:57.861416 amazon-ssm-agent[2154]: New Seelog Logger Creation Complete Apr 30 00:43:57.861416 amazon-ssm-agent[2154]: 2025/04/30 00:43:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:57.861416 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 30 00:43:57.861416 amazon-ssm-agent[2154]: 2025/04/30 00:43:57 processing appconfig overrides Apr 30 00:43:57.862514 polkitd[2181]: Loading rules from directory /etc/polkit-1/rules.d Apr 30 00:43:57.862646 polkitd[2181]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 30 00:43:57.864054 containerd[2021]: time="2025-04-30T00:43:57.863975081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:43:57.869674 amazon-ssm-agent[2154]: 2025/04/30 00:43:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:57.869674 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:57.869674 amazon-ssm-agent[2154]: 2025/04/30 00:43:57 processing appconfig overrides Apr 30 00:43:57.869674 amazon-ssm-agent[2154]: 2025/04/30 00:43:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:57.869674 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:57.869674 amazon-ssm-agent[2154]: 2025/04/30 00:43:57 processing appconfig overrides Apr 30 00:43:57.869674 amazon-ssm-agent[2154]: 2025-04-30 00:43:57 INFO Proxy environment variables: Apr 30 00:43:57.868734 polkitd[2181]: Finished loading, compiling and executing 2 rules Apr 30 00:43:57.876006 containerd[2021]: time="2025-04-30T00:43:57.874087769Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:43:57.876006 containerd[2021]: time="2025-04-30T00:43:57.874181813Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Apr 30 00:43:57.876006 containerd[2021]: time="2025-04-30T00:43:57.874225481Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 00:43:57.876006 containerd[2021]: time="2025-04-30T00:43:57.874706765Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 00:43:57.876006 containerd[2021]: time="2025-04-30T00:43:57.874786457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 00:43:57.876006 containerd[2021]: time="2025-04-30T00:43:57.874955093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:43:57.876006 containerd[2021]: time="2025-04-30T00:43:57.874990781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:43:57.877715 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 30 00:43:57.880012 amazon-ssm-agent[2154]: 2025/04/30 00:43:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:57.880012 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 30 00:43:57.880178 containerd[2021]: time="2025-04-30T00:43:57.875369729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:43:57.880178 containerd[2021]: time="2025-04-30T00:43:57.879205913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 30 00:43:57.880178 containerd[2021]: time="2025-04-30T00:43:57.879267761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:43:57.880178 containerd[2021]: time="2025-04-30T00:43:57.879295637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 00:43:57.880178 containerd[2021]: time="2025-04-30T00:43:57.879545429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:43:57.880178 containerd[2021]: time="2025-04-30T00:43:57.879946793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 00:43:57.884945 amazon-ssm-agent[2154]: 2025/04/30 00:43:57 processing appconfig overrides Apr 30 00:43:57.885052 containerd[2021]: time="2025-04-30T00:43:57.884795693Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 00:43:57.884151 polkitd[2181]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 30 00:43:57.881619 systemd[1]: Started polkit.service - Authorization Manager. Apr 30 00:43:57.885416 containerd[2021]: time="2025-04-30T00:43:57.884909981Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 00:43:57.885966 containerd[2021]: time="2025-04-30T00:43:57.885721361Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 30 00:43:57.886144 containerd[2021]: time="2025-04-30T00:43:57.886102961Z" level=info msg="metadata content store policy set" policy=shared Apr 30 00:43:57.896537 containerd[2021]: time="2025-04-30T00:43:57.896464385Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 00:43:57.897009 containerd[2021]: time="2025-04-30T00:43:57.896739941Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 00:43:57.897009 containerd[2021]: time="2025-04-30T00:43:57.896964953Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 00:43:57.897243 containerd[2021]: time="2025-04-30T00:43:57.897180281Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 00:43:57.900470 containerd[2021]: time="2025-04-30T00:43:57.897221465Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 00:43:57.900470 containerd[2021]: time="2025-04-30T00:43:57.899654117Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 00:43:57.900470 containerd[2021]: time="2025-04-30T00:43:57.900119213Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 00:43:57.900470 containerd[2021]: time="2025-04-30T00:43:57.900346661Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 00:43:57.901842 containerd[2021]: time="2025-04-30T00:43:57.900378737Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 00:43:57.901842 containerd[2021]: time="2025-04-30T00:43:57.901413881Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 30 00:43:57.901842 containerd[2021]: time="2025-04-30T00:43:57.901458305Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 00:43:57.901842 containerd[2021]: time="2025-04-30T00:43:57.901513517Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 00:43:57.901842 containerd[2021]: time="2025-04-30T00:43:57.901546493Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 00:43:57.901842 containerd[2021]: time="2025-04-30T00:43:57.901603445Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 00:43:57.901842 containerd[2021]: time="2025-04-30T00:43:57.901640657Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 30 00:43:57.901842 containerd[2021]: time="2025-04-30T00:43:57.901692737Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 00:43:57.901842 containerd[2021]: time="2025-04-30T00:43:57.901727717Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 00:43:57.901842 containerd[2021]: time="2025-04-30T00:43:57.901762169Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 00:43:57.902562 containerd[2021]: time="2025-04-30T00:43:57.901804481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.902562 containerd[2021]: time="2025-04-30T00:43:57.902372297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 30 00:43:57.902562 containerd[2021]: time="2025-04-30T00:43:57.902427377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.902562 containerd[2021]: time="2025-04-30T00:43:57.902473193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.902562 containerd[2021]: time="2025-04-30T00:43:57.902509649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.903362 containerd[2021]: time="2025-04-30T00:43:57.902930837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.903362 containerd[2021]: time="2025-04-30T00:43:57.902977133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.903362 containerd[2021]: time="2025-04-30T00:43:57.903010241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.903362 containerd[2021]: time="2025-04-30T00:43:57.903043793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.903362 containerd[2021]: time="2025-04-30T00:43:57.903078209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.903362 containerd[2021]: time="2025-04-30T00:43:57.903109361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.903362 containerd[2021]: time="2025-04-30T00:43:57.903158381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.903362 containerd[2021]: time="2025-04-30T00:43:57.903193637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 30 00:43:57.903362 containerd[2021]: time="2025-04-30T00:43:57.903245813Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 00:43:57.907439 containerd[2021]: time="2025-04-30T00:43:57.904288913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.907439 containerd[2021]: time="2025-04-30T00:43:57.904348097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.907439 containerd[2021]: time="2025-04-30T00:43:57.904375853Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 00:43:57.907439 containerd[2021]: time="2025-04-30T00:43:57.904909853Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 00:43:57.907439 containerd[2021]: time="2025-04-30T00:43:57.904956365Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 00:43:57.907439 containerd[2021]: time="2025-04-30T00:43:57.905534957Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 00:43:57.907439 containerd[2021]: time="2025-04-30T00:43:57.905568137Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:43:57.907439 containerd[2021]: time="2025-04-30T00:43:57.905592821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.907439 containerd[2021]: time="2025-04-30T00:43:57.905628713Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Apr 30 00:43:57.907439 containerd[2021]: time="2025-04-30T00:43:57.905652593Z" level=info msg="NRI interface is disabled by configuration." Apr 30 00:43:57.907439 containerd[2021]: time="2025-04-30T00:43:57.905677925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 00:43:57.907978 containerd[2021]: time="2025-04-30T00:43:57.906183689Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:43:57.907978 containerd[2021]: time="2025-04-30T00:43:57.906290141Z" level=info msg="Connect containerd service" Apr 30 00:43:57.908433 containerd[2021]: time="2025-04-30T00:43:57.906381149Z" level=info msg="using legacy CRI server" Apr 30 00:43:57.914415 containerd[2021]: time="2025-04-30T00:43:57.910443125Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:43:57.914415 containerd[2021]: time="2025-04-30T00:43:57.910687661Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:43:57.914415 containerd[2021]: time="2025-04-30T00:43:57.911821205Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:43:57.914415 containerd[2021]: time="2025-04-30T00:43:57.912504869Z" level=info msg="Start subscribing containerd event" Apr 30 
00:43:57.914415 containerd[2021]: time="2025-04-30T00:43:57.912602081Z" level=info msg="Start recovering state" Apr 30 00:43:57.914415 containerd[2021]: time="2025-04-30T00:43:57.912735425Z" level=info msg="Start event monitor" Apr 30 00:43:57.914415 containerd[2021]: time="2025-04-30T00:43:57.912759905Z" level=info msg="Start snapshots syncer" Apr 30 00:43:57.914415 containerd[2021]: time="2025-04-30T00:43:57.912783701Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:43:57.914415 containerd[2021]: time="2025-04-30T00:43:57.912803525Z" level=info msg="Start streaming server" Apr 30 00:43:57.919614 containerd[2021]: time="2025-04-30T00:43:57.917669609Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:43:57.919614 containerd[2021]: time="2025-04-30T00:43:57.917802713Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:43:57.921467 containerd[2021]: time="2025-04-30T00:43:57.921420305Z" level=info msg="containerd successfully booted in 0.430632s" Apr 30 00:43:57.921548 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:43:57.936220 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 00:43:57.970484 amazon-ssm-agent[2154]: 2025-04-30 00:43:57 INFO https_proxy: Apr 30 00:43:57.972771 systemd-hostnamed[2012]: Hostname set to (transient) Apr 30 00:43:57.972771 systemd-resolved[1807]: System hostname changed to 'ip-172-31-24-0'. 
Apr 30 00:43:58.038038 coreos-metadata[2110]: Apr 30 00:43:58.037 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 30 00:43:58.041579 coreos-metadata[2110]: Apr 30 00:43:58.038 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 30 00:43:58.046850 coreos-metadata[2110]: Apr 30 00:43:58.046 INFO Fetch successful Apr 30 00:43:58.046850 coreos-metadata[2110]: Apr 30 00:43:58.046 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 30 00:43:58.050131 coreos-metadata[2110]: Apr 30 00:43:58.048 INFO Fetch successful Apr 30 00:43:58.052826 unknown[2110]: wrote ssh authorized keys file for user: core Apr 30 00:43:58.076636 amazon-ssm-agent[2154]: 2025-04-30 00:43:57 INFO http_proxy: Apr 30 00:43:58.125509 update-ssh-keys[2205]: Updated "/home/core/.ssh/authorized_keys" Apr 30 00:43:58.129593 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 00:43:58.136960 systemd[1]: Finished sshkeys.service. 
Apr 30 00:43:58.183500 amazon-ssm-agent[2154]: 2025-04-30 00:43:57 INFO no_proxy: Apr 30 00:43:58.278742 amazon-ssm-agent[2154]: 2025-04-30 00:43:57 INFO Checking if agent identity type OnPrem can be assumed Apr 30 00:43:58.380600 amazon-ssm-agent[2154]: 2025-04-30 00:43:57 INFO Checking if agent identity type EC2 can be assumed Apr 30 00:43:58.482680 amazon-ssm-agent[2154]: 2025-04-30 00:43:58 INFO Agent will take identity from EC2 Apr 30 00:43:58.585701 amazon-ssm-agent[2154]: 2025-04-30 00:43:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 00:43:58.686437 amazon-ssm-agent[2154]: 2025-04-30 00:43:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 00:43:58.785775 amazon-ssm-agent[2154]: 2025-04-30 00:43:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 00:43:58.885275 amazon-ssm-agent[2154]: 2025-04-30 00:43:58 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 30 00:43:59.001507 amazon-ssm-agent[2154]: 2025-04-30 00:43:58 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 30 00:43:59.006425 tar[2002]: linux-arm64/LICENSE Apr 30 00:43:59.006425 tar[2002]: linux-arm64/README.md Apr 30 00:43:59.052073 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 00:43:59.098437 amazon-ssm-agent[2154]: 2025-04-30 00:43:58 INFO [amazon-ssm-agent] Starting Core Agent Apr 30 00:43:59.197055 amazon-ssm-agent[2154]: 2025-04-30 00:43:58 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 30 00:43:59.298721 amazon-ssm-agent[2154]: 2025-04-30 00:43:58 INFO [Registrar] Starting registrar module Apr 30 00:43:59.399502 amazon-ssm-agent[2154]: 2025-04-30 00:43:58 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 30 00:43:59.676806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:43:59.691980 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:43:59.707289 ntpd[1988]: Listen normally on 6 eth0 [fe80::440:1eff:fef3:24f1%2]:123 Apr 30 00:43:59.710434 ntpd[1988]: 30 Apr 00:43:59 ntpd[1988]: Listen normally on 6 eth0 [fe80::440:1eff:fef3:24f1%2]:123 Apr 30 00:43:59.951941 amazon-ssm-agent[2154]: 2025-04-30 00:43:59 INFO [EC2Identity] EC2 registration was successful. Apr 30 00:43:59.988106 amazon-ssm-agent[2154]: 2025-04-30 00:43:59 INFO [CredentialRefresher] credentialRefresher has started Apr 30 00:43:59.988106 amazon-ssm-agent[2154]: 2025-04-30 00:43:59 INFO [CredentialRefresher] Starting credentials refresher loop Apr 30 00:43:59.988106 amazon-ssm-agent[2154]: 2025-04-30 00:43:59 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 30 00:44:00.044227 sshd_keygen[2029]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:44:00.051552 amazon-ssm-agent[2154]: 2025-04-30 00:43:59 INFO [CredentialRefresher] Next credential rotation will be in 30.666658061066666 minutes Apr 30 00:44:00.094129 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:44:00.106066 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:44:00.122093 systemd[1]: Started sshd@0-172.31.24.0:22-147.75.109.163:32780.service - OpenSSH per-connection server daemon (147.75.109.163:32780). Apr 30 00:44:00.155533 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:44:00.156131 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:44:00.174271 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:44:00.217201 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:44:00.230344 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Apr 30 00:44:00.243099 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 00:44:00.246517 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:44:00.248856 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:44:00.251477 systemd[1]: Startup finished in 1.239s (kernel) + 13.211s (initrd) + 9.281s (userspace) = 23.732s. Apr 30 00:44:00.431947 sshd[2234]: Accepted publickey for core from 147.75.109.163 port 32780 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:00.436555 sshd[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:00.466966 systemd-logind[1995]: New session 1 of user core. Apr 30 00:44:00.472978 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:44:00.480986 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 00:44:00.525451 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:44:00.533985 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:44:00.559942 (systemd)[2251]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:44:00.791323 kubelet[2219]: E0430 00:44:00.791141 2219 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:00.796019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:00.796378 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:44:00.798164 systemd[1]: kubelet.service: Consumed 1.382s CPU time. Apr 30 00:44:00.807849 systemd[2251]: Queued start job for default target default.target. 
Apr 30 00:44:00.822732 systemd[2251]: Created slice app.slice - User Application Slice. Apr 30 00:44:00.822801 systemd[2251]: Reached target paths.target - Paths. Apr 30 00:44:00.822836 systemd[2251]: Reached target timers.target - Timers. Apr 30 00:44:00.825614 systemd[2251]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:44:00.846814 systemd[2251]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:44:00.847052 systemd[2251]: Reached target sockets.target - Sockets. Apr 30 00:44:00.847084 systemd[2251]: Reached target basic.target - Basic System. Apr 30 00:44:00.847194 systemd[2251]: Reached target default.target - Main User Target. Apr 30 00:44:00.847261 systemd[2251]: Startup finished in 272ms. Apr 30 00:44:00.848341 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:44:00.860667 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:44:01.018676 amazon-ssm-agent[2154]: 2025-04-30 00:44:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 30 00:44:01.089139 systemd[1]: Started sshd@1-172.31.24.0:22-147.75.109.163:32792.service - OpenSSH per-connection server daemon (147.75.109.163:32792). Apr 30 00:44:01.119870 amazon-ssm-agent[2154]: 2025-04-30 00:44:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2263) started Apr 30 00:44:01.220888 amazon-ssm-agent[2154]: 2025-04-30 00:44:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 30 00:44:01.354517 sshd[2268]: Accepted publickey for core from 147.75.109.163 port 32792 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:01.357982 sshd[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:01.367669 systemd-logind[1995]: New session 2 of user core. 
Apr 30 00:44:01.379702 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 00:44:01.553523 sshd[2268]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:01.565691 systemd[1]: sshd@1-172.31.24.0:22-147.75.109.163:32792.service: Deactivated successfully. Apr 30 00:44:01.575229 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:44:01.581355 systemd-logind[1995]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:44:01.584558 systemd-logind[1995]: Removed session 2. Apr 30 00:44:01.611943 systemd[1]: Started sshd@2-172.31.24.0:22-147.75.109.163:32808.service - OpenSSH per-connection server daemon (147.75.109.163:32808). Apr 30 00:44:01.888126 sshd[2281]: Accepted publickey for core from 147.75.109.163 port 32808 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:01.891178 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:01.901820 systemd-logind[1995]: New session 3 of user core. Apr 30 00:44:01.909733 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:44:02.080988 sshd[2281]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:02.089198 systemd[1]: sshd@2-172.31.24.0:22-147.75.109.163:32808.service: Deactivated successfully. Apr 30 00:44:02.093555 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:44:02.096608 systemd-logind[1995]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:44:02.099931 systemd-logind[1995]: Removed session 3. Apr 30 00:44:02.138053 systemd[1]: Started sshd@3-172.31.24.0:22-147.75.109.163:32812.service - OpenSSH per-connection server daemon (147.75.109.163:32812). 
Apr 30 00:44:02.400900 sshd[2288]: Accepted publickey for core from 147.75.109.163 port 32812 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:02.404235 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:02.414899 systemd-logind[1995]: New session 4 of user core. Apr 30 00:44:02.422742 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:44:02.597059 sshd[2288]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:02.605350 systemd[1]: sshd@3-172.31.24.0:22-147.75.109.163:32812.service: Deactivated successfully. Apr 30 00:44:02.611135 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:44:02.613300 systemd-logind[1995]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:44:02.615353 systemd-logind[1995]: Removed session 4. Apr 30 00:44:02.647711 systemd[1]: Started sshd@4-172.31.24.0:22-147.75.109.163:32818.service - OpenSSH per-connection server daemon (147.75.109.163:32818). Apr 30 00:44:02.917009 sshd[2295]: Accepted publickey for core from 147.75.109.163 port 32818 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:02.919661 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:02.927768 systemd-logind[1995]: New session 5 of user core. Apr 30 00:44:02.936652 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 00:44:03.092067 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:44:03.093304 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:03.110755 sudo[2298]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:03.150131 sshd[2295]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:03.157098 systemd-logind[1995]: Session 5 logged out. Waiting for processes to exit. 
Apr 30 00:44:03.158949 systemd[1]: sshd@4-172.31.24.0:22-147.75.109.163:32818.service: Deactivated successfully. Apr 30 00:44:03.163850 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:44:03.168638 systemd-logind[1995]: Removed session 5. Apr 30 00:44:03.210635 systemd[1]: Started sshd@5-172.31.24.0:22-147.75.109.163:32832.service - OpenSSH per-connection server daemon (147.75.109.163:32832). Apr 30 00:44:03.483351 sshd[2303]: Accepted publickey for core from 147.75.109.163 port 32832 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:03.486247 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:03.495571 systemd-logind[1995]: New session 6 of user core. Apr 30 00:44:03.506726 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 00:44:03.645676 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:44:03.647196 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:03.655242 sudo[2307]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:03.666367 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 00:44:03.667155 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:03.699812 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 00:44:03.702677 auditctl[2310]: No rules Apr 30 00:44:03.703542 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:44:03.703955 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 00:44:04.177139 systemd-resolved[1807]: Clock change detected. Flushing caches. Apr 30 00:44:04.188635 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Apr 30 00:44:04.239447 augenrules[2328]: No rules Apr 30 00:44:04.243658 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 30 00:44:04.246444 sudo[2306]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:04.285105 sshd[2303]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:04.291573 systemd[1]: sshd@5-172.31.24.0:22-147.75.109.163:32832.service: Deactivated successfully. Apr 30 00:44:04.294836 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:44:04.298637 systemd-logind[1995]: Session 6 logged out. Waiting for processes to exit. Apr 30 00:44:04.300771 systemd-logind[1995]: Removed session 6. Apr 30 00:44:04.339088 systemd[1]: Started sshd@6-172.31.24.0:22-147.75.109.163:32838.service - OpenSSH per-connection server daemon (147.75.109.163:32838). Apr 30 00:44:04.606882 sshd[2336]: Accepted publickey for core from 147.75.109.163 port 32838 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:04.609932 sshd[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:04.618919 systemd-logind[1995]: New session 7 of user core. Apr 30 00:44:04.629843 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:44:04.769774 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:44:04.770501 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:05.230021 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 30 00:44:05.242016 (dockerd)[2355]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:44:05.610785 dockerd[2355]: time="2025-04-30T00:44:05.610689111Z" level=info msg="Starting up" Apr 30 00:44:05.786237 dockerd[2355]: time="2025-04-30T00:44:05.785554839Z" level=info msg="Loading containers: start." Apr 30 00:44:05.960624 kernel: Initializing XFRM netlink socket Apr 30 00:44:05.996668 (udev-worker)[2378]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:44:06.095883 systemd-networkd[1846]: docker0: Link UP Apr 30 00:44:06.124558 dockerd[2355]: time="2025-04-30T00:44:06.124457137Z" level=info msg="Loading containers: done." Apr 30 00:44:06.149971 dockerd[2355]: time="2025-04-30T00:44:06.149891233Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 00:44:06.150351 dockerd[2355]: time="2025-04-30T00:44:06.150052321Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 00:44:06.150351 dockerd[2355]: time="2025-04-30T00:44:06.150262057Z" level=info msg="Daemon has completed initialization" Apr 30 00:44:06.205110 dockerd[2355]: time="2025-04-30T00:44:06.204372614Z" level=info msg="API listen on /run/docker.sock" Apr 30 00:44:06.204500 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 00:44:07.397985 containerd[2021]: time="2025-04-30T00:44:07.397826800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 00:44:08.043954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3207167494.mount: Deactivated successfully. 
Apr 30 00:44:09.574147 containerd[2021]: time="2025-04-30T00:44:09.574066794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:09.576256 containerd[2021]: time="2025-04-30T00:44:09.576172386Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150" Apr 30 00:44:09.577061 containerd[2021]: time="2025-04-30T00:44:09.576973374Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:09.582823 containerd[2021]: time="2025-04-30T00:44:09.582710970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:09.585556 containerd[2021]: time="2025-04-30T00:44:09.585243006Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.18734777s" Apr 30 00:44:09.585556 containerd[2021]: time="2025-04-30T00:44:09.585306102Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" Apr 30 00:44:09.627840 containerd[2021]: time="2025-04-30T00:44:09.627789979Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 00:44:11.207561 containerd[2021]: time="2025-04-30T00:44:11.207343710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:11.209568 containerd[2021]: time="2025-04-30T00:44:11.209486358Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550" Apr 30 00:44:11.210550 containerd[2021]: time="2025-04-30T00:44:11.210070734Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:11.215840 containerd[2021]: time="2025-04-30T00:44:11.215731206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:11.218560 containerd[2021]: time="2025-04-30T00:44:11.218145966Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.590115843s" Apr 30 00:44:11.218560 containerd[2021]: time="2025-04-30T00:44:11.218209590Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" Apr 30 00:44:11.261405 containerd[2021]: time="2025-04-30T00:44:11.261335491Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 00:44:11.277475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 00:44:11.284425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:11.604619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:44:11.631111 (kubelet)[2573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:44:11.724929 kubelet[2573]: E0430 00:44:11.724808 2573 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:11.733006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:11.733389 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:44:12.385162 containerd[2021]: time="2025-04-30T00:44:12.385106720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:12.387543 containerd[2021]: time="2025-04-30T00:44:12.387473660Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945" Apr 30 00:44:12.388943 containerd[2021]: time="2025-04-30T00:44:12.388861496Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:12.395173 containerd[2021]: time="2025-04-30T00:44:12.395086280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:12.397649 containerd[2021]: time="2025-04-30T00:44:12.397425776Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", 
repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.136024141s" Apr 30 00:44:12.397649 containerd[2021]: time="2025-04-30T00:44:12.397487384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" Apr 30 00:44:12.435986 containerd[2021]: time="2025-04-30T00:44:12.435900009Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 00:44:13.751628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2428610961.mount: Deactivated successfully. Apr 30 00:44:14.240502 containerd[2021]: time="2025-04-30T00:44:14.240427077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:14.242636 containerd[2021]: time="2025-04-30T00:44:14.242545941Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705" Apr 30 00:44:14.243885 containerd[2021]: time="2025-04-30T00:44:14.243807058Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:14.247881 containerd[2021]: time="2025-04-30T00:44:14.247774438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:14.249612 containerd[2021]: time="2025-04-30T00:44:14.249175150Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.813213893s" Apr 30 00:44:14.249612 containerd[2021]: time="2025-04-30T00:44:14.249233170Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" Apr 30 00:44:14.286952 containerd[2021]: time="2025-04-30T00:44:14.286880554Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 00:44:14.860671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656361529.mount: Deactivated successfully. Apr 30 00:44:15.951575 containerd[2021]: time="2025-04-30T00:44:15.950934602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:15.953452 containerd[2021]: time="2025-04-30T00:44:15.953374982Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Apr 30 00:44:15.955583 containerd[2021]: time="2025-04-30T00:44:15.954621326Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:15.960961 containerd[2021]: time="2025-04-30T00:44:15.960877874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:15.963595 containerd[2021]: time="2025-04-30T00:44:15.963503438Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.676554112s" Apr 30 00:44:15.963826 containerd[2021]: time="2025-04-30T00:44:15.963792854Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 30 00:44:16.004020 containerd[2021]: time="2025-04-30T00:44:16.003962854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 00:44:16.495416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004079421.mount: Deactivated successfully. Apr 30 00:44:16.504296 containerd[2021]: time="2025-04-30T00:44:16.504134101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:16.505900 containerd[2021]: time="2025-04-30T00:44:16.505817533Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Apr 30 00:44:16.506085 containerd[2021]: time="2025-04-30T00:44:16.506033557Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:16.511978 containerd[2021]: time="2025-04-30T00:44:16.511852297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:16.515626 containerd[2021]: time="2025-04-30T00:44:16.513734185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 509.475291ms" Apr 30 
00:44:16.515626 containerd[2021]: time="2025-04-30T00:44:16.513834109Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 30 00:44:16.554162 containerd[2021]: time="2025-04-30T00:44:16.554095645Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 00:44:17.077981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974195718.mount: Deactivated successfully. Apr 30 00:44:19.263250 containerd[2021]: time="2025-04-30T00:44:19.263164526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:19.266332 containerd[2021]: time="2025-04-30T00:44:19.265776230Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Apr 30 00:44:19.267444 containerd[2021]: time="2025-04-30T00:44:19.267345014Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:19.273971 containerd[2021]: time="2025-04-30T00:44:19.273843686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:19.276901 containerd[2021]: time="2025-04-30T00:44:19.276687903Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.722114334s" Apr 30 00:44:19.277242 containerd[2021]: time="2025-04-30T00:44:19.277084179Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image 
reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Apr 30 00:44:21.777484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 00:44:21.785849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:22.131987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:22.147039 (kubelet)[2768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:44:22.237446 kubelet[2768]: E0430 00:44:22.235944 2768 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:22.243842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:22.244224 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:44:28.456680 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 30 00:44:28.689116 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:28.698044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:28.743112 systemd[1]: Reloading requested from client PID 2785 ('systemctl') (unit session-7.scope)... Apr 30 00:44:28.743151 systemd[1]: Reloading... Apr 30 00:44:28.980559 zram_generator::config[2828]: No configuration found. Apr 30 00:44:29.312738 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:44:29.519010 systemd[1]: Reloading finished in 775 ms. 
Apr 30 00:44:29.619300 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 00:44:29.619554 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 00:44:29.620351 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:29.631177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:30.803013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:30.818469 (kubelet)[2887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:44:30.923588 kubelet[2887]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:44:30.923588 kubelet[2887]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:44:30.923588 kubelet[2887]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:44:30.926656 kubelet[2887]: I0430 00:44:30.926489 2887 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:44:31.957163 kubelet[2887]: I0430 00:44:31.957097 2887 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:44:31.957163 kubelet[2887]: I0430 00:44:31.957146 2887 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:44:31.957846 kubelet[2887]: I0430 00:44:31.957476 2887 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:44:31.983343 kubelet[2887]: I0430 00:44:31.983249 2887 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:44:31.984539 kubelet[2887]: E0430 00:44:31.984363 2887 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.0:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:32.004922 kubelet[2887]: I0430 00:44:32.004880 2887 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:44:32.008587 kubelet[2887]: I0430 00:44:32.007776 2887 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:44:32.008587 kubelet[2887]: I0430 00:44:32.007884 2887 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:44:32.008587 kubelet[2887]: I0430 00:44:32.008334 2887 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 
00:44:32.008587 kubelet[2887]: I0430 00:44:32.008359 2887 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:44:32.009008 kubelet[2887]: I0430 00:44:32.008717 2887 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:44:32.010341 kubelet[2887]: I0430 00:44:32.010279 2887 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:44:32.010341 kubelet[2887]: I0430 00:44:32.010335 2887 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:44:32.010579 kubelet[2887]: I0430 00:44:32.010488 2887 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:44:32.012070 kubelet[2887]: I0430 00:44:32.010654 2887 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:44:32.012850 kubelet[2887]: W0430 00:44:32.012758 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.0:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:32.013052 kubelet[2887]: E0430 00:44:32.013026 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.0:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:32.013356 kubelet[2887]: I0430 00:44:32.013327 2887 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:44:32.013868 kubelet[2887]: I0430 00:44:32.013836 2887 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:44:32.014087 kubelet[2887]: W0430 00:44:32.014066 2887 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 00:44:32.015549 kubelet[2887]: I0430 00:44:32.015491 2887 server.go:1264] "Started kubelet" Apr 30 00:44:32.016872 kubelet[2887]: W0430 00:44:32.015858 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.0:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-0&limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:32.016872 kubelet[2887]: E0430 00:44:32.015944 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.0:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-0&limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:32.026238 kubelet[2887]: E0430 00:44:32.024878 2887 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.0:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.0:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-0.183af1fc6c9839ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-0,UID:ip-172-31-24-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-0,},FirstTimestamp:2025-04-30 00:44:32.015456686 +0000 UTC m=+1.187250055,LastTimestamp:2025-04-30 00:44:32.015456686 +0000 UTC m=+1.187250055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-0,}" Apr 30 00:44:32.026715 kubelet[2887]: I0430 00:44:32.026677 2887 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:44:32.034623 kubelet[2887]: I0430 00:44:32.034476 2887 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:44:32.038873 kubelet[2887]: I0430 00:44:32.038683 2887 
server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:44:32.040274 kubelet[2887]: I0430 00:44:32.040097 2887 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:44:32.041211 kubelet[2887]: I0430 00:44:32.040951 2887 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:44:32.041627 kubelet[2887]: I0430 00:44:32.041392 2887 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:44:32.044809 kubelet[2887]: W0430 00:44:32.043453 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.0:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:32.044809 kubelet[2887]: E0430 00:44:32.043643 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.0:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:32.045338 kubelet[2887]: E0430 00:44:32.045231 2887 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:44:32.045498 kubelet[2887]: I0430 00:44:32.045380 2887 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:44:32.048107 kubelet[2887]: I0430 00:44:32.047978 2887 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:44:32.049433 kubelet[2887]: E0430 00:44:32.049282 2887 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-0?timeout=10s\": dial tcp 172.31.24.0:6443: connect: connection refused" interval="200ms" Apr 30 00:44:32.050641 kubelet[2887]: I0430 00:44:32.050377 2887 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:44:32.050641 kubelet[2887]: I0430 00:44:32.050426 2887 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:44:32.051553 kubelet[2887]: I0430 00:44:32.051208 2887 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:44:32.091362 kubelet[2887]: I0430 00:44:32.090822 2887 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:44:32.095232 kubelet[2887]: I0430 00:44:32.094358 2887 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:44:32.095232 kubelet[2887]: I0430 00:44:32.094489 2887 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:44:32.095232 kubelet[2887]: I0430 00:44:32.094570 2887 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:44:32.095232 kubelet[2887]: E0430 00:44:32.094658 2887 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:44:32.108247 kubelet[2887]: W0430 00:44:32.108011 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.0:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:32.108247 kubelet[2887]: E0430 00:44:32.108164 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.0:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:32.110566 kubelet[2887]: I0430 00:44:32.109945 2887 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:44:32.110566 kubelet[2887]: I0430 00:44:32.110036 2887 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:44:32.110566 kubelet[2887]: I0430 00:44:32.110086 2887 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:44:32.113784 kubelet[2887]: I0430 00:44:32.113725 2887 policy_none.go:49] "None policy: Start" Apr 30 00:44:32.118493 kubelet[2887]: I0430 00:44:32.117696 2887 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:44:32.118493 kubelet[2887]: I0430 00:44:32.117749 2887 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:44:32.134295 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Apr 30 00:44:32.147558 kubelet[2887]: I0430 00:44:32.146429 2887 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-0" Apr 30 00:44:32.147760 kubelet[2887]: E0430 00:44:32.147650 2887 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.0:6443/api/v1/nodes\": dial tcp 172.31.24.0:6443: connect: connection refused" node="ip-172-31-24-0" Apr 30 00:44:32.155415 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 00:44:32.163085 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 00:44:32.174786 kubelet[2887]: I0430 00:44:32.174728 2887 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:44:32.175655 kubelet[2887]: I0430 00:44:32.175061 2887 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:44:32.175655 kubelet[2887]: I0430 00:44:32.175272 2887 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:44:32.180068 kubelet[2887]: E0430 00:44:32.179979 2887 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-0\" not found" Apr 30 00:44:32.194939 kubelet[2887]: I0430 00:44:32.194839 2887 topology_manager.go:215] "Topology Admit Handler" podUID="95adc66962105b3829c629e5180779b1" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-0" Apr 30 00:44:32.198164 kubelet[2887]: I0430 00:44:32.197189 2887 topology_manager.go:215] "Topology Admit Handler" podUID="88950e37813d5845367c7df738dc9590" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:32.200702 kubelet[2887]: I0430 00:44:32.199350 2887 topology_manager.go:215] "Topology Admit Handler" podUID="5ea04fbed95f83dd83d3f8776b122a53" podNamespace="kube-system" 
podName="kube-scheduler-ip-172-31-24-0" Apr 30 00:44:32.218269 systemd[1]: Created slice kubepods-burstable-pod88950e37813d5845367c7df738dc9590.slice - libcontainer container kubepods-burstable-pod88950e37813d5845367c7df738dc9590.slice. Apr 30 00:44:32.244410 systemd[1]: Created slice kubepods-burstable-pod95adc66962105b3829c629e5180779b1.slice - libcontainer container kubepods-burstable-pod95adc66962105b3829c629e5180779b1.slice. Apr 30 00:44:32.250030 kubelet[2887]: I0430 00:44:32.249903 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95adc66962105b3829c629e5180779b1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-0\" (UID: \"95adc66962105b3829c629e5180779b1\") " pod="kube-system/kube-apiserver-ip-172-31-24-0" Apr 30 00:44:32.250030 kubelet[2887]: I0430 00:44:32.250013 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88950e37813d5845367c7df738dc9590-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-0\" (UID: \"88950e37813d5845367c7df738dc9590\") " pod="kube-system/kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:32.250382 kubelet[2887]: I0430 00:44:32.250076 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88950e37813d5845367c7df738dc9590-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-0\" (UID: \"88950e37813d5845367c7df738dc9590\") " pod="kube-system/kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:32.250382 kubelet[2887]: I0430 00:44:32.250125 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88950e37813d5845367c7df738dc9590-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-24-0\" (UID: \"88950e37813d5845367c7df738dc9590\") " pod="kube-system/kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:32.250382 kubelet[2887]: I0430 00:44:32.250185 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ea04fbed95f83dd83d3f8776b122a53-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-0\" (UID: \"5ea04fbed95f83dd83d3f8776b122a53\") " pod="kube-system/kube-scheduler-ip-172-31-24-0" Apr 30 00:44:32.250382 kubelet[2887]: I0430 00:44:32.250230 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95adc66962105b3829c629e5180779b1-ca-certs\") pod \"kube-apiserver-ip-172-31-24-0\" (UID: \"95adc66962105b3829c629e5180779b1\") " pod="kube-system/kube-apiserver-ip-172-31-24-0" Apr 30 00:44:32.250382 kubelet[2887]: I0430 00:44:32.250266 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95adc66962105b3829c629e5180779b1-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-0\" (UID: \"95adc66962105b3829c629e5180779b1\") " pod="kube-system/kube-apiserver-ip-172-31-24-0" Apr 30 00:44:32.251248 kubelet[2887]: I0430 00:44:32.250304 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88950e37813d5845367c7df738dc9590-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-0\" (UID: \"88950e37813d5845367c7df738dc9590\") " pod="kube-system/kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:32.251248 kubelet[2887]: I0430 00:44:32.250346 2887 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/88950e37813d5845367c7df738dc9590-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-24-0\" (UID: \"88950e37813d5845367c7df738dc9590\") " pod="kube-system/kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:32.251248 kubelet[2887]: E0430 00:44:32.250930 2887 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-0?timeout=10s\": dial tcp 172.31.24.0:6443: connect: connection refused" interval="400ms" Apr 30 00:44:32.264467 systemd[1]: Created slice kubepods-burstable-pod5ea04fbed95f83dd83d3f8776b122a53.slice - libcontainer container kubepods-burstable-pod5ea04fbed95f83dd83d3f8776b122a53.slice. Apr 30 00:44:32.351451 kubelet[2887]: I0430 00:44:32.351405 2887 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-0" Apr 30 00:44:32.352019 kubelet[2887]: E0430 00:44:32.351959 2887 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.0:6443/api/v1/nodes\": dial tcp 172.31.24.0:6443: connect: connection refused" node="ip-172-31-24-0" Apr 30 00:44:32.538183 containerd[2021]: time="2025-04-30T00:44:32.538007416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-0,Uid:88950e37813d5845367c7df738dc9590,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:32.559645 containerd[2021]: time="2025-04-30T00:44:32.559491784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-0,Uid:95adc66962105b3829c629e5180779b1,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:32.571856 containerd[2021]: time="2025-04-30T00:44:32.571774217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-0,Uid:5ea04fbed95f83dd83d3f8776b122a53,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:32.652124 kubelet[2887]: E0430 00:44:32.651968 2887 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.24.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-0?timeout=10s\": dial tcp 172.31.24.0:6443: connect: connection refused" interval="800ms" Apr 30 00:44:32.754839 kubelet[2887]: I0430 00:44:32.754772 2887 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-0" Apr 30 00:44:32.755354 kubelet[2887]: E0430 00:44:32.755280 2887 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.0:6443/api/v1/nodes\": dial tcp 172.31.24.0:6443: connect: connection refused" node="ip-172-31-24-0" Apr 30 00:44:32.976949 kubelet[2887]: W0430 00:44:32.976676 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.0:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:32.976949 kubelet[2887]: E0430 00:44:32.976787 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.0:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:33.000828 kubelet[2887]: W0430 00:44:33.000698 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.0:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-0&limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:33.000828 kubelet[2887]: E0430 00:44:33.000795 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.0:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-0&limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:33.045925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138697624.mount: Deactivated 
successfully. Apr 30 00:44:33.055573 containerd[2021]: time="2025-04-30T00:44:33.055455303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:33.057416 containerd[2021]: time="2025-04-30T00:44:33.057350319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 30 00:44:33.058910 containerd[2021]: time="2025-04-30T00:44:33.058726287Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:33.061242 containerd[2021]: time="2025-04-30T00:44:33.061022175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:44:33.063397 containerd[2021]: time="2025-04-30T00:44:33.063212619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:44:33.065341 containerd[2021]: time="2025-04-30T00:44:33.065172435Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:33.070574 containerd[2021]: time="2025-04-30T00:44:33.068756247Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:33.073670 containerd[2021]: time="2025-04-30T00:44:33.073606635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 
00:44:33.076783 containerd[2021]: time="2025-04-30T00:44:33.076671123Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 538.524711ms" Apr 30 00:44:33.088026 containerd[2021]: time="2025-04-30T00:44:33.087956643Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 516.039806ms" Apr 30 00:44:33.118016 containerd[2021]: time="2025-04-30T00:44:33.117958839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.997327ms" Apr 30 00:44:33.273808 containerd[2021]: time="2025-04-30T00:44:33.273503524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:33.276744 containerd[2021]: time="2025-04-30T00:44:33.273634972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:33.276744 containerd[2021]: time="2025-04-30T00:44:33.273795760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:33.276744 containerd[2021]: time="2025-04-30T00:44:33.274235116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:33.280448 containerd[2021]: time="2025-04-30T00:44:33.279663124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:33.280448 containerd[2021]: time="2025-04-30T00:44:33.279785356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:33.280448 containerd[2021]: time="2025-04-30T00:44:33.279831184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:33.280448 containerd[2021]: time="2025-04-30T00:44:33.280018372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:33.280448 containerd[2021]: time="2025-04-30T00:44:33.279835360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:33.280448 containerd[2021]: time="2025-04-30T00:44:33.279923308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:33.280448 containerd[2021]: time="2025-04-30T00:44:33.279949312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:33.280448 containerd[2021]: time="2025-04-30T00:44:33.280109812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:33.331921 systemd[1]: Started cri-containerd-81fa4391c5066e9de2074092e6ba9a49d8ed5ac1388a8c0d1ee96121e0d6c8a3.scope - libcontainer container 81fa4391c5066e9de2074092e6ba9a49d8ed5ac1388a8c0d1ee96121e0d6c8a3. Apr 30 00:44:33.355984 systemd[1]: Started cri-containerd-e00b6500558e8ebf52969a72b5c8c8d6bed94c225c248996d9a357266bc11261.scope - libcontainer container e00b6500558e8ebf52969a72b5c8c8d6bed94c225c248996d9a357266bc11261. Apr 30 00:44:33.371872 systemd[1]: Started cri-containerd-02c8542774ca63c0ae8b54c0ecd1c4e7dac81b8256f333c93c50fcbe1f602aaa.scope - libcontainer container 02c8542774ca63c0ae8b54c0ecd1c4e7dac81b8256f333c93c50fcbe1f602aaa. Apr 30 00:44:33.455177 kubelet[2887]: E0430 00:44:33.453145 2887 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-0?timeout=10s\": dial tcp 172.31.24.0:6443: connect: connection refused" interval="1.6s" Apr 30 00:44:33.478867 containerd[2021]: time="2025-04-30T00:44:33.478786361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-0,Uid:5ea04fbed95f83dd83d3f8776b122a53,Namespace:kube-system,Attempt:0,} returns sandbox id \"e00b6500558e8ebf52969a72b5c8c8d6bed94c225c248996d9a357266bc11261\"" Apr 30 00:44:33.494863 containerd[2021]: time="2025-04-30T00:44:33.494776457Z" level=info msg="CreateContainer within sandbox \"e00b6500558e8ebf52969a72b5c8c8d6bed94c225c248996d9a357266bc11261\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:44:33.505446 containerd[2021]: time="2025-04-30T00:44:33.505272449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-0,Uid:95adc66962105b3829c629e5180779b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"81fa4391c5066e9de2074092e6ba9a49d8ed5ac1388a8c0d1ee96121e0d6c8a3\"" Apr 30 00:44:33.512273 
containerd[2021]: time="2025-04-30T00:44:33.511987901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-0,Uid:88950e37813d5845367c7df738dc9590,Namespace:kube-system,Attempt:0,} returns sandbox id \"02c8542774ca63c0ae8b54c0ecd1c4e7dac81b8256f333c93c50fcbe1f602aaa\"" Apr 30 00:44:33.518027 containerd[2021]: time="2025-04-30T00:44:33.517614113Z" level=info msg="CreateContainer within sandbox \"81fa4391c5066e9de2074092e6ba9a49d8ed5ac1388a8c0d1ee96121e0d6c8a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:44:33.522586 containerd[2021]: time="2025-04-30T00:44:33.522403001Z" level=info msg="CreateContainer within sandbox \"02c8542774ca63c0ae8b54c0ecd1c4e7dac81b8256f333c93c50fcbe1f602aaa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:44:33.542716 containerd[2021]: time="2025-04-30T00:44:33.542616833Z" level=info msg="CreateContainer within sandbox \"e00b6500558e8ebf52969a72b5c8c8d6bed94c225c248996d9a357266bc11261\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ac2937d2dac3bfb1b98d149eec5b8641c215d34aa24074d2565a15cb5d43e125\"" Apr 30 00:44:33.545113 containerd[2021]: time="2025-04-30T00:44:33.545038037Z" level=info msg="StartContainer for \"ac2937d2dac3bfb1b98d149eec5b8641c215d34aa24074d2565a15cb5d43e125\"" Apr 30 00:44:33.547870 containerd[2021]: time="2025-04-30T00:44:33.547652993Z" level=info msg="CreateContainer within sandbox \"81fa4391c5066e9de2074092e6ba9a49d8ed5ac1388a8c0d1ee96121e0d6c8a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eba0a51458d407cc7507b36cb131ae602d5217f660aebfc14a9474f23fb91f9b\"" Apr 30 00:44:33.549503 containerd[2021]: time="2025-04-30T00:44:33.549421097Z" level=info msg="StartContainer for \"eba0a51458d407cc7507b36cb131ae602d5217f660aebfc14a9474f23fb91f9b\"" Apr 30 00:44:33.561110 kubelet[2887]: I0430 00:44:33.560682 2887 kubelet_node_status.go:73] "Attempting 
to register node" node="ip-172-31-24-0" Apr 30 00:44:33.561874 kubelet[2887]: E0430 00:44:33.561550 2887 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.0:6443/api/v1/nodes\": dial tcp 172.31.24.0:6443: connect: connection refused" node="ip-172-31-24-0" Apr 30 00:44:33.570707 containerd[2021]: time="2025-04-30T00:44:33.570636342Z" level=info msg="CreateContainer within sandbox \"02c8542774ca63c0ae8b54c0ecd1c4e7dac81b8256f333c93c50fcbe1f602aaa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"470050b82cb0812d879b0b9902490f7e44c2ff5def476f3d8fe3b934fd68625f\"" Apr 30 00:44:33.571768 containerd[2021]: time="2025-04-30T00:44:33.571704390Z" level=info msg="StartContainer for \"470050b82cb0812d879b0b9902490f7e44c2ff5def476f3d8fe3b934fd68625f\"" Apr 30 00:44:33.617547 kubelet[2887]: W0430 00:44:33.617471 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.0:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:33.617704 kubelet[2887]: E0430 00:44:33.617579 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.0:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:33.620602 systemd[1]: Started cri-containerd-ac2937d2dac3bfb1b98d149eec5b8641c215d34aa24074d2565a15cb5d43e125.scope - libcontainer container ac2937d2dac3bfb1b98d149eec5b8641c215d34aa24074d2565a15cb5d43e125. 
Apr 30 00:44:33.623665 kubelet[2887]: W0430 00:44:33.622644 2887 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.0:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:33.623665 kubelet[2887]: E0430 00:44:33.622737 2887 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.0:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:33.652633 systemd[1]: Started cri-containerd-eba0a51458d407cc7507b36cb131ae602d5217f660aebfc14a9474f23fb91f9b.scope - libcontainer container eba0a51458d407cc7507b36cb131ae602d5217f660aebfc14a9474f23fb91f9b. Apr 30 00:44:33.684784 systemd[1]: Started cri-containerd-470050b82cb0812d879b0b9902490f7e44c2ff5def476f3d8fe3b934fd68625f.scope - libcontainer container 470050b82cb0812d879b0b9902490f7e44c2ff5def476f3d8fe3b934fd68625f. 
Apr 30 00:44:33.775419 containerd[2021]: time="2025-04-30T00:44:33.775226851Z" level=info msg="StartContainer for \"ac2937d2dac3bfb1b98d149eec5b8641c215d34aa24074d2565a15cb5d43e125\" returns successfully" Apr 30 00:44:33.802861 containerd[2021]: time="2025-04-30T00:44:33.802523479Z" level=info msg="StartContainer for \"eba0a51458d407cc7507b36cb131ae602d5217f660aebfc14a9474f23fb91f9b\" returns successfully" Apr 30 00:44:33.827852 containerd[2021]: time="2025-04-30T00:44:33.827759203Z" level=info msg="StartContainer for \"470050b82cb0812d879b0b9902490f7e44c2ff5def476f3d8fe3b934fd68625f\" returns successfully" Apr 30 00:44:33.994113 kubelet[2887]: E0430 00:44:33.994044 2887 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.0:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.0:6443: connect: connection refused Apr 30 00:44:35.168641 kubelet[2887]: I0430 00:44:35.167639 2887 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-0" Apr 30 00:44:38.017546 kubelet[2887]: I0430 00:44:38.015477 2887 apiserver.go:52] "Watching apiserver" Apr 30 00:44:38.146121 kubelet[2887]: I0430 00:44:38.146028 2887 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:44:38.165546 kubelet[2887]: E0430 00:44:38.165331 2887 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-0\" not found" node="ip-172-31-24-0" Apr 30 00:44:38.169858 kubelet[2887]: I0430 00:44:38.169599 2887 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-0" Apr 30 00:44:40.792972 systemd[1]: Reloading requested from client PID 3166 ('systemctl') (unit session-7.scope)... Apr 30 00:44:40.793022 systemd[1]: Reloading... 
Apr 30 00:44:40.993584 zram_generator::config[3215]: No configuration found. Apr 30 00:44:41.255185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:44:41.474206 systemd[1]: Reloading finished in 680 ms. Apr 30 00:44:41.575359 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:41.591598 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:44:41.592250 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:41.592499 systemd[1]: kubelet.service: Consumed 2.074s CPU time, 115.0M memory peak, 0B memory swap peak. Apr 30 00:44:41.603171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:41.939847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:41.961258 (kubelet)[3266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:44:42.084651 kubelet[3266]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:44:42.084651 kubelet[3266]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:44:42.084651 kubelet[3266]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:44:42.085397 kubelet[3266]: I0430 00:44:42.084911 3266 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:44:42.097490 kubelet[3266]: I0430 00:44:42.097158 3266 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:44:42.097490 kubelet[3266]: I0430 00:44:42.097567 3266 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:44:42.098600 kubelet[3266]: I0430 00:44:42.098233 3266 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:44:42.101482 kubelet[3266]: I0430 00:44:42.101437 3266 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 30 00:44:42.105080 kubelet[3266]: I0430 00:44:42.105007 3266 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:44:42.124844 kubelet[3266]: I0430 00:44:42.124776 3266 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:44:42.125872 kubelet[3266]: I0430 00:44:42.125771 3266 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:44:42.126702 kubelet[3266]: I0430 00:44:42.125834 3266 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:44:42.127028 kubelet[3266]: I0430 00:44:42.126995 3266 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 
00:44:42.127164 kubelet[3266]: I0430 00:44:42.127144 3266 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:44:42.127428 kubelet[3266]: I0430 00:44:42.127404 3266 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:44:42.127782 kubelet[3266]: I0430 00:44:42.127749 3266 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:44:42.128113 kubelet[3266]: I0430 00:44:42.127939 3266 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:44:42.128113 kubelet[3266]: I0430 00:44:42.128036 3266 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:44:42.128565 kubelet[3266]: I0430 00:44:42.128268 3266 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:44:42.138092 sudo[3280]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 00:44:42.140927 sudo[3280]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 00:44:42.142548 kubelet[3266]: I0430 00:44:42.142396 3266 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:44:42.145124 kubelet[3266]: I0430 00:44:42.145074 3266 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:44:42.151554 kubelet[3266]: I0430 00:44:42.149194 3266 server.go:1264] "Started kubelet" Apr 30 00:44:42.151554 kubelet[3266]: I0430 00:44:42.150412 3266 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:44:42.151554 kubelet[3266]: I0430 00:44:42.151293 3266 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:44:42.154030 kubelet[3266]: I0430 00:44:42.153677 3266 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:44:42.154030 kubelet[3266]: I0430 00:44:42.153775 3266 server.go:163] 
"Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:44:42.156194 kubelet[3266]: I0430 00:44:42.155412 3266 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:44:42.178755 kubelet[3266]: I0430 00:44:42.178692 3266 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:44:42.180875 kubelet[3266]: I0430 00:44:42.180815 3266 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:44:42.181140 kubelet[3266]: I0430 00:44:42.181105 3266 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:44:42.187644 kubelet[3266]: I0430 00:44:42.186296 3266 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:44:42.187644 kubelet[3266]: I0430 00:44:42.186459 3266 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:44:42.200765 kubelet[3266]: E0430 00:44:42.198972 3266 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:44:42.207245 kubelet[3266]: I0430 00:44:42.206073 3266 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:44:42.208615 kubelet[3266]: I0430 00:44:42.207046 3266 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:44:42.213558 kubelet[3266]: I0430 00:44:42.213254 3266 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:44:42.213558 kubelet[3266]: I0430 00:44:42.213330 3266 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:44:42.213558 kubelet[3266]: I0430 00:44:42.213367 3266 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:44:42.213558 kubelet[3266]: E0430 00:44:42.213435 3266 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:44:42.286364 kubelet[3266]: I0430 00:44:42.286309 3266 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-0" Apr 30 00:44:42.306474 kubelet[3266]: I0430 00:44:42.306414 3266 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-24-0" Apr 30 00:44:42.306666 kubelet[3266]: I0430 00:44:42.306553 3266 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-0" Apr 30 00:44:42.316286 kubelet[3266]: E0430 00:44:42.316095 3266 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 00:44:42.387598 kubelet[3266]: I0430 00:44:42.387239 3266 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:44:42.387598 kubelet[3266]: I0430 00:44:42.387271 3266 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:44:42.387598 kubelet[3266]: I0430 00:44:42.387307 3266 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:44:42.387896 kubelet[3266]: I0430 00:44:42.387622 3266 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:44:42.387896 kubelet[3266]: I0430 00:44:42.387644 3266 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:44:42.387896 kubelet[3266]: I0430 00:44:42.387681 3266 policy_none.go:49] "None policy: Start" Apr 30 00:44:42.390390 kubelet[3266]: I0430 00:44:42.390332 3266 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:44:42.391283 kubelet[3266]: I0430 
00:44:42.390415 3266 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:44:42.391283 kubelet[3266]: I0430 00:44:42.390866 3266 state_mem.go:75] "Updated machine memory state" Apr 30 00:44:42.406950 kubelet[3266]: I0430 00:44:42.406882 3266 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:44:42.408046 kubelet[3266]: I0430 00:44:42.407247 3266 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:44:42.414548 kubelet[3266]: I0430 00:44:42.412776 3266 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:44:42.517343 kubelet[3266]: I0430 00:44:42.517173 3266 topology_manager.go:215] "Topology Admit Handler" podUID="95adc66962105b3829c629e5180779b1" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-0" Apr 30 00:44:42.517483 kubelet[3266]: I0430 00:44:42.517387 3266 topology_manager.go:215] "Topology Admit Handler" podUID="88950e37813d5845367c7df738dc9590" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:42.517483 kubelet[3266]: I0430 00:44:42.517467 3266 topology_manager.go:215] "Topology Admit Handler" podUID="5ea04fbed95f83dd83d3f8776b122a53" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-0" Apr 30 00:44:42.535224 kubelet[3266]: E0430 00:44:42.535098 3266 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-0\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-0" Apr 30 00:44:42.538335 kubelet[3266]: E0430 00:44:42.538228 3266 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-24-0\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:42.585010 kubelet[3266]: I0430 00:44:42.583984 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/95adc66962105b3829c629e5180779b1-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-0\" (UID: \"95adc66962105b3829c629e5180779b1\") " pod="kube-system/kube-apiserver-ip-172-31-24-0" Apr 30 00:44:42.585010 kubelet[3266]: I0430 00:44:42.584104 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ea04fbed95f83dd83d3f8776b122a53-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-0\" (UID: \"5ea04fbed95f83dd83d3f8776b122a53\") " pod="kube-system/kube-scheduler-ip-172-31-24-0" Apr 30 00:44:42.585010 kubelet[3266]: I0430 00:44:42.584161 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/88950e37813d5845367c7df738dc9590-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-0\" (UID: \"88950e37813d5845367c7df738dc9590\") " pod="kube-system/kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:42.585010 kubelet[3266]: I0430 00:44:42.584200 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88950e37813d5845367c7df738dc9590-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-0\" (UID: \"88950e37813d5845367c7df738dc9590\") " pod="kube-system/kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:42.585010 kubelet[3266]: I0430 00:44:42.584241 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/88950e37813d5845367c7df738dc9590-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-0\" (UID: \"88950e37813d5845367c7df738dc9590\") " pod="kube-system/kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:42.585448 kubelet[3266]: I0430 00:44:42.584279 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88950e37813d5845367c7df738dc9590-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-0\" (UID: \"88950e37813d5845367c7df738dc9590\") " pod="kube-system/kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:42.585448 kubelet[3266]: I0430 00:44:42.584318 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95adc66962105b3829c629e5180779b1-ca-certs\") pod \"kube-apiserver-ip-172-31-24-0\" (UID: \"95adc66962105b3829c629e5180779b1\") " pod="kube-system/kube-apiserver-ip-172-31-24-0" Apr 30 00:44:42.585448 kubelet[3266]: I0430 00:44:42.584390 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95adc66962105b3829c629e5180779b1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-0\" (UID: \"95adc66962105b3829c629e5180779b1\") " pod="kube-system/kube-apiserver-ip-172-31-24-0" Apr 30 00:44:42.585448 kubelet[3266]: I0430 00:44:42.584434 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88950e37813d5845367c7df738dc9590-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-0\" (UID: \"88950e37813d5845367c7df738dc9590\") " pod="kube-system/kube-controller-manager-ip-172-31-24-0" Apr 30 00:44:42.989658 update_engine[1996]: I20250430 00:44:42.989567 1996 update_attempter.cc:509] Updating boot flags... 
Apr 30 00:44:43.133097 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3316) Apr 30 00:44:43.136829 kubelet[3266]: I0430 00:44:43.135875 3266 apiserver.go:52] "Watching apiserver" Apr 30 00:44:43.160197 sudo[3280]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:43.180959 kubelet[3266]: I0430 00:44:43.180921 3266 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:44:43.312743 kubelet[3266]: E0430 00:44:43.312444 3266 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-0\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-0" Apr 30 00:44:43.460845 kubelet[3266]: I0430 00:44:43.458858 3266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-0" podStartSLOduration=1.458833563 podStartE2EDuration="1.458833563s" podCreationTimestamp="2025-04-30 00:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:43.412175606 +0000 UTC m=+1.441045532" watchObservedRunningTime="2025-04-30 00:44:43.458833563 +0000 UTC m=+1.487703405" Apr 30 00:44:43.517281 kubelet[3266]: I0430 00:44:43.515486 3266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-0" podStartSLOduration=4.515442591 podStartE2EDuration="4.515442591s" podCreationTimestamp="2025-04-30 00:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:43.465450735 +0000 UTC m=+1.494320577" watchObservedRunningTime="2025-04-30 00:44:43.515442591 +0000 UTC m=+1.544312445" Apr 30 00:44:43.644773 kubelet[3266]: I0430 00:44:43.643959 3266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ip-172-31-24-0" podStartSLOduration=2.643937284 podStartE2EDuration="2.643937284s" podCreationTimestamp="2025-04-30 00:44:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:43.527624739 +0000 UTC m=+1.556494593" watchObservedRunningTime="2025-04-30 00:44:43.643937284 +0000 UTC m=+1.672807138" Apr 30 00:44:43.681544 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3307) Apr 30 00:44:46.585335 sudo[2339]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:46.623489 sshd[2336]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:46.630845 systemd[1]: sshd@6-172.31.24.0:22-147.75.109.163:32838.service: Deactivated successfully. Apr 30 00:44:46.634294 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:44:46.634785 systemd[1]: session-7.scope: Consumed 13.685s CPU time, 184.9M memory peak, 0B memory swap peak. Apr 30 00:44:46.635703 systemd-logind[1995]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:44:46.639331 systemd-logind[1995]: Removed session 7. Apr 30 00:44:57.068566 kubelet[3266]: I0430 00:44:57.068346 3266 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:44:57.069979 containerd[2021]: time="2025-04-30T00:44:57.069441158Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 30 00:44:57.071494 kubelet[3266]: I0430 00:44:57.070909 3266 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 00:44:57.915313 kubelet[3266]: I0430 00:44:57.914385 3266 topology_manager.go:215] "Topology Admit Handler" podUID="7f2894d3-a0ed-4799-992b-2f369be45ac3" podNamespace="kube-system" podName="kube-proxy-vqrl6"
Apr 30 00:44:57.930576 kubelet[3266]: W0430 00:44:57.929842 3266 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-24-0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-0' and this object
Apr 30 00:44:57.930576 kubelet[3266]: E0430 00:44:57.929956 3266 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-24-0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-0' and this object
Apr 30 00:44:57.930576 kubelet[3266]: W0430 00:44:57.930146 3266 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-24-0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-0' and this object
Apr 30 00:44:57.930576 kubelet[3266]: E0430 00:44:57.930191 3266 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-24-0" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-0' and this object
Apr 30 00:44:57.935388 systemd[1]: Created slice kubepods-besteffort-pod7f2894d3_a0ed_4799_992b_2f369be45ac3.slice - libcontainer container kubepods-besteffort-pod7f2894d3_a0ed_4799_992b_2f369be45ac3.slice.
Apr 30 00:44:57.966027 kubelet[3266]: I0430 00:44:57.965957 3266 topology_manager.go:215] "Topology Admit Handler" podUID="4663941c-a276-4655-8c11-f802888445f8" podNamespace="kube-system" podName="cilium-rngsj"
Apr 30 00:44:57.986547 systemd[1]: Created slice kubepods-burstable-pod4663941c_a276_4655_8c11_f802888445f8.slice - libcontainer container kubepods-burstable-pod4663941c_a276_4655_8c11_f802888445f8.slice.
Apr 30 00:44:58.002749 kubelet[3266]: I0430 00:44:58.002671 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-etc-cni-netd\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.002895 kubelet[3266]: I0430 00:44:58.002768 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-host-proc-sys-kernel\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.002895 kubelet[3266]: I0430 00:44:58.002828 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f2894d3-a0ed-4799-992b-2f369be45ac3-lib-modules\") pod \"kube-proxy-vqrl6\" (UID: \"7f2894d3-a0ed-4799-992b-2f369be45ac3\") " pod="kube-system/kube-proxy-vqrl6"
Apr 30 00:44:58.002895 kubelet[3266]: I0430 00:44:58.002881 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-host-proc-sys-net\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.003128 kubelet[3266]: I0430 00:44:58.002930 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7f2894d3-a0ed-4799-992b-2f369be45ac3-kube-proxy\") pod \"kube-proxy-vqrl6\" (UID: \"7f2894d3-a0ed-4799-992b-2f369be45ac3\") " pod="kube-system/kube-proxy-vqrl6"
Apr 30 00:44:58.003128 kubelet[3266]: I0430 00:44:58.002978 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cilium-run\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.003128 kubelet[3266]: I0430 00:44:58.003023 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-lib-modules\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.003128 kubelet[3266]: I0430 00:44:58.003078 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cilium-cgroup\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.005816 kubelet[3266]: I0430 00:44:58.003133 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4663941c-a276-4655-8c11-f802888445f8-hubble-tls\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.005816 kubelet[3266]: I0430 00:44:58.003173 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-hostproc\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.005816 kubelet[3266]: I0430 00:44:58.003218 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cni-path\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.005816 kubelet[3266]: I0430 00:44:58.003271 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-bpf-maps\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.005816 kubelet[3266]: I0430 00:44:58.003326 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f2894d3-a0ed-4799-992b-2f369be45ac3-xtables-lock\") pod \"kube-proxy-vqrl6\" (UID: \"7f2894d3-a0ed-4799-992b-2f369be45ac3\") " pod="kube-system/kube-proxy-vqrl6"
Apr 30 00:44:58.005816 kubelet[3266]: I0430 00:44:58.003374 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4663941c-a276-4655-8c11-f802888445f8-cilium-config-path\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.006168 kubelet[3266]: I0430 00:44:58.003412 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkr9\" (UniqueName: \"kubernetes.io/projected/4663941c-a276-4655-8c11-f802888445f8-kube-api-access-8tkr9\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.006168 kubelet[3266]: I0430 00:44:58.003465 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kklq6\" (UniqueName: \"kubernetes.io/projected/7f2894d3-a0ed-4799-992b-2f369be45ac3-kube-api-access-kklq6\") pod \"kube-proxy-vqrl6\" (UID: \"7f2894d3-a0ed-4799-992b-2f369be45ac3\") " pod="kube-system/kube-proxy-vqrl6"
Apr 30 00:44:58.006168 kubelet[3266]: I0430 00:44:58.004585 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-xtables-lock\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.006168 kubelet[3266]: I0430 00:44:58.004696 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4663941c-a276-4655-8c11-f802888445f8-clustermesh-secrets\") pod \"cilium-rngsj\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " pod="kube-system/cilium-rngsj"
Apr 30 00:44:58.251710 kubelet[3266]: I0430 00:44:58.250934 3266 topology_manager.go:215] "Topology Admit Handler" podUID="9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0" podNamespace="kube-system" podName="cilium-operator-599987898-hb8pb"
Apr 30 00:44:58.270075 systemd[1]: Created slice kubepods-besteffort-pod9a8dedf3_c2f4_4ec5_9f9f_36506e3bdea0.slice - libcontainer container kubepods-besteffort-pod9a8dedf3_c2f4_4ec5_9f9f_36506e3bdea0.slice.
Apr 30 00:44:58.307593 kubelet[3266]: I0430 00:44:58.307492 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0-cilium-config-path\") pod \"cilium-operator-599987898-hb8pb\" (UID: \"9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0\") " pod="kube-system/cilium-operator-599987898-hb8pb"
Apr 30 00:44:58.307769 kubelet[3266]: I0430 00:44:58.307599 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w72h5\" (UniqueName: \"kubernetes.io/projected/9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0-kube-api-access-w72h5\") pod \"cilium-operator-599987898-hb8pb\" (UID: \"9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0\") " pod="kube-system/cilium-operator-599987898-hb8pb"
Apr 30 00:44:59.106759 kubelet[3266]: E0430 00:44:59.106691 3266 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:59.107021 kubelet[3266]: E0430 00:44:59.106850 3266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7f2894d3-a0ed-4799-992b-2f369be45ac3-kube-proxy podName:7f2894d3-a0ed-4799-992b-2f369be45ac3 nodeName:}" failed. No retries permitted until 2025-04-30 00:44:59.606814224 +0000 UTC m=+17.635684054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/7f2894d3-a0ed-4799-992b-2f369be45ac3-kube-proxy") pod "kube-proxy-vqrl6" (UID: "7f2894d3-a0ed-4799-992b-2f369be45ac3") : failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:59.213595 kubelet[3266]: E0430 00:44:59.213255 3266 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:59.213595 kubelet[3266]: E0430 00:44:59.213303 3266 projected.go:200] Error preparing data for projected volume kube-api-access-kklq6 for pod kube-system/kube-proxy-vqrl6: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:59.213595 kubelet[3266]: E0430 00:44:59.213404 3266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f2894d3-a0ed-4799-992b-2f369be45ac3-kube-api-access-kklq6 podName:7f2894d3-a0ed-4799-992b-2f369be45ac3 nodeName:}" failed. No retries permitted until 2025-04-30 00:44:59.713375833 +0000 UTC m=+17.742245675 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kklq6" (UniqueName: "kubernetes.io/projected/7f2894d3-a0ed-4799-992b-2f369be45ac3-kube-api-access-kklq6") pod "kube-proxy-vqrl6" (UID: "7f2894d3-a0ed-4799-992b-2f369be45ac3") : failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:59.215661 kubelet[3266]: E0430 00:44:59.215457 3266 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:59.215661 kubelet[3266]: E0430 00:44:59.215537 3266 projected.go:200] Error preparing data for projected volume kube-api-access-8tkr9 for pod kube-system/cilium-rngsj: failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:59.215661 kubelet[3266]: E0430 00:44:59.215642 3266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4663941c-a276-4655-8c11-f802888445f8-kube-api-access-8tkr9 podName:4663941c-a276-4655-8c11-f802888445f8 nodeName:}" failed. No retries permitted until 2025-04-30 00:44:59.715615453 +0000 UTC m=+17.744485295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8tkr9" (UniqueName: "kubernetes.io/projected/4663941c-a276-4655-8c11-f802888445f8-kube-api-access-8tkr9") pod "cilium-rngsj" (UID: "4663941c-a276-4655-8c11-f802888445f8") : failed to sync configmap cache: timed out waiting for the condition
Apr 30 00:44:59.482474 containerd[2021]: time="2025-04-30T00:44:59.481168638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hb8pb,Uid:9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0,Namespace:kube-system,Attempt:0,}"
Apr 30 00:44:59.536381 containerd[2021]: time="2025-04-30T00:44:59.535914786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:44:59.536381 containerd[2021]: time="2025-04-30T00:44:59.536112750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:44:59.536381 containerd[2021]: time="2025-04-30T00:44:59.536149542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:59.537239 containerd[2021]: time="2025-04-30T00:44:59.536967306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:59.573477 systemd[1]: run-containerd-runc-k8s.io-fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d-runc.NW6L8c.mount: Deactivated successfully.
Apr 30 00:44:59.586848 systemd[1]: Started cri-containerd-fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d.scope - libcontainer container fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d.
Apr 30 00:44:59.655281 containerd[2021]: time="2025-04-30T00:44:59.655224247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hb8pb,Uid:9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\""
Apr 30 00:44:59.659766 containerd[2021]: time="2025-04-30T00:44:59.659437711Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 30 00:44:59.752998 containerd[2021]: time="2025-04-30T00:44:59.752293712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqrl6,Uid:7f2894d3-a0ed-4799-992b-2f369be45ac3,Namespace:kube-system,Attempt:0,}"
Apr 30 00:44:59.797712 containerd[2021]: time="2025-04-30T00:44:59.796710824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:44:59.797712 containerd[2021]: time="2025-04-30T00:44:59.796920356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:44:59.797712 containerd[2021]: time="2025-04-30T00:44:59.796947296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:59.797712 containerd[2021]: time="2025-04-30T00:44:59.797617760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:59.800504 containerd[2021]: time="2025-04-30T00:44:59.800447144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rngsj,Uid:4663941c-a276-4655-8c11-f802888445f8,Namespace:kube-system,Attempt:0,}"
Apr 30 00:44:59.834919 systemd[1]: Started cri-containerd-7c0ac6daebfca50cfad0b5c5abfbf11bcfd9428d4530d606f166944074aaf3c3.scope - libcontainer container 7c0ac6daebfca50cfad0b5c5abfbf11bcfd9428d4530d606f166944074aaf3c3.
Apr 30 00:44:59.868093 containerd[2021]: time="2025-04-30T00:44:59.867718244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:44:59.868866 containerd[2021]: time="2025-04-30T00:44:59.867863540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:44:59.868866 containerd[2021]: time="2025-04-30T00:44:59.868049444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:59.872235 containerd[2021]: time="2025-04-30T00:44:59.871800296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:44:59.910302 containerd[2021]: time="2025-04-30T00:44:59.909927728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqrl6,Uid:7f2894d3-a0ed-4799-992b-2f369be45ac3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c0ac6daebfca50cfad0b5c5abfbf11bcfd9428d4530d606f166944074aaf3c3\""
Apr 30 00:44:59.922953 systemd[1]: Started cri-containerd-820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11.scope - libcontainer container 820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11.
Apr 30 00:44:59.935648 containerd[2021]: time="2025-04-30T00:44:59.935565776Z" level=info msg="CreateContainer within sandbox \"7c0ac6daebfca50cfad0b5c5abfbf11bcfd9428d4530d606f166944074aaf3c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 00:44:59.992070 containerd[2021]: time="2025-04-30T00:44:59.992006541Z" level=info msg="CreateContainer within sandbox \"7c0ac6daebfca50cfad0b5c5abfbf11bcfd9428d4530d606f166944074aaf3c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"29d0315c227da73dd8cf6548e248e59fa40399623e6b18ab60f16183deef81e3\""
Apr 30 00:44:59.996671 containerd[2021]: time="2025-04-30T00:44:59.995848137Z" level=info msg="StartContainer for \"29d0315c227da73dd8cf6548e248e59fa40399623e6b18ab60f16183deef81e3\""
Apr 30 00:45:00.000938 containerd[2021]: time="2025-04-30T00:45:00.000859145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rngsj,Uid:4663941c-a276-4655-8c11-f802888445f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\""
Apr 30 00:45:00.055828 systemd[1]: Started cri-containerd-29d0315c227da73dd8cf6548e248e59fa40399623e6b18ab60f16183deef81e3.scope - libcontainer container 29d0315c227da73dd8cf6548e248e59fa40399623e6b18ab60f16183deef81e3.
Apr 30 00:45:00.109803 containerd[2021]: time="2025-04-30T00:45:00.109719701Z" level=info msg="StartContainer for \"29d0315c227da73dd8cf6548e248e59fa40399623e6b18ab60f16183deef81e3\" returns successfully"
Apr 30 00:45:01.112459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200061806.mount: Deactivated successfully.
Apr 30 00:45:01.825492 containerd[2021]: time="2025-04-30T00:45:01.825431446Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:45:01.827494 containerd[2021]: time="2025-04-30T00:45:01.827436814Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Apr 30 00:45:01.829896 containerd[2021]: time="2025-04-30T00:45:01.829820170Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:45:01.833016 containerd[2021]: time="2025-04-30T00:45:01.832808662Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.173307015s"
Apr 30 00:45:01.833016 containerd[2021]: time="2025-04-30T00:45:01.832873498Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Apr 30 00:45:01.837158 containerd[2021]: time="2025-04-30T00:45:01.836810074Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 30 00:45:01.840467 containerd[2021]: time="2025-04-30T00:45:01.839852878Z" level=info msg="CreateContainer within sandbox \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 30 00:45:01.869310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4017125282.mount: Deactivated successfully.
Apr 30 00:45:01.875334 containerd[2021]: time="2025-04-30T00:45:01.875161882Z" level=info msg="CreateContainer within sandbox \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\""
Apr 30 00:45:01.876162 containerd[2021]: time="2025-04-30T00:45:01.876087166Z" level=info msg="StartContainer for \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\""
Apr 30 00:45:01.936891 systemd[1]: Started cri-containerd-0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2.scope - libcontainer container 0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2.
Apr 30 00:45:02.007757 containerd[2021]: time="2025-04-30T00:45:02.007664839Z" level=info msg="StartContainer for \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\" returns successfully"
Apr 30 00:45:02.278607 kubelet[3266]: I0430 00:45:02.278474 3266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vqrl6" podStartSLOduration=5.27844616 podStartE2EDuration="5.27844616s" podCreationTimestamp="2025-04-30 00:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:00.395719015 +0000 UTC m=+18.424588857" watchObservedRunningTime="2025-04-30 00:45:02.27844616 +0000 UTC m=+20.307316002"
Apr 30 00:45:02.404328 kubelet[3266]: I0430 00:45:02.403077 3266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-hb8pb" podStartSLOduration=2.225938586 podStartE2EDuration="4.403055733s" podCreationTimestamp="2025-04-30 00:44:58 +0000 UTC" firstStartedPulling="2025-04-30 00:44:59.658413763 +0000 UTC m=+17.687283593" lastFinishedPulling="2025-04-30 00:45:01.835530898 +0000 UTC m=+19.864400740" observedRunningTime="2025-04-30 00:45:02.400539009 +0000 UTC m=+20.429408863" watchObservedRunningTime="2025-04-30 00:45:02.403055733 +0000 UTC m=+20.431925587"
Apr 30 00:45:07.871105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850582648.mount: Deactivated successfully.
Apr 30 00:45:10.742595 containerd[2021]: time="2025-04-30T00:45:10.741695022Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:45:10.744567 containerd[2021]: time="2025-04-30T00:45:10.744144774Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Apr 30 00:45:10.747129 containerd[2021]: time="2025-04-30T00:45:10.747025254Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:45:10.752805 containerd[2021]: time="2025-04-30T00:45:10.752703162Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.915812712s"
Apr 30 00:45:10.752805 containerd[2021]: time="2025-04-30T00:45:10.752810514Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Apr 30 00:45:10.758878 containerd[2021]: time="2025-04-30T00:45:10.758771754Z" level=info msg="CreateContainer within sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 00:45:10.789574 containerd[2021]: time="2025-04-30T00:45:10.789474294Z" level=info msg="CreateContainer within sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df\""
Apr 30 00:45:10.790603 containerd[2021]: time="2025-04-30T00:45:10.790409802Z" level=info msg="StartContainer for \"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df\""
Apr 30 00:45:10.849828 systemd[1]: Started cri-containerd-81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df.scope - libcontainer container 81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df.
Apr 30 00:45:10.897338 containerd[2021]: time="2025-04-30T00:45:10.897232879Z" level=info msg="StartContainer for \"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df\" returns successfully"
Apr 30 00:45:10.925348 systemd[1]: cri-containerd-81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df.scope: Deactivated successfully.
Apr 30 00:45:11.436084 containerd[2021]: time="2025-04-30T00:45:11.435993390Z" level=info msg="shim disconnected" id=81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df namespace=k8s.io
Apr 30 00:45:11.437005 containerd[2021]: time="2025-04-30T00:45:11.436794906Z" level=warning msg="cleaning up after shim disconnected" id=81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df namespace=k8s.io
Apr 30 00:45:11.437005 containerd[2021]: time="2025-04-30T00:45:11.436907742Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:45:11.777840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df-rootfs.mount: Deactivated successfully.
Apr 30 00:45:12.441272 containerd[2021]: time="2025-04-30T00:45:12.440921839Z" level=info msg="CreateContainer within sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 00:45:12.473667 containerd[2021]: time="2025-04-30T00:45:12.473355883Z" level=info msg="CreateContainer within sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a\""
Apr 30 00:45:12.479948 containerd[2021]: time="2025-04-30T00:45:12.479446723Z" level=info msg="StartContainer for \"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a\""
Apr 30 00:45:12.554853 systemd[1]: Started cri-containerd-ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a.scope - libcontainer container ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a.
Apr 30 00:45:12.616702 containerd[2021]: time="2025-04-30T00:45:12.615486043Z" level=info msg="StartContainer for \"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a\" returns successfully"
Apr 30 00:45:12.648703 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:45:12.649295 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:45:12.649439 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:45:12.659223 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:45:12.659846 systemd[1]: cri-containerd-ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a.scope: Deactivated successfully.
Apr 30 00:45:12.718678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:45:12.728679 containerd[2021]: time="2025-04-30T00:45:12.728591108Z" level=info msg="shim disconnected" id=ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a namespace=k8s.io
Apr 30 00:45:12.728679 containerd[2021]: time="2025-04-30T00:45:12.728666564Z" level=warning msg="cleaning up after shim disconnected" id=ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a namespace=k8s.io
Apr 30 00:45:12.729078 containerd[2021]: time="2025-04-30T00:45:12.728688596Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:45:12.749713 containerd[2021]: time="2025-04-30T00:45:12.749640836Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:45:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 00:45:12.777761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a-rootfs.mount: Deactivated successfully.
Apr 30 00:45:13.447572 containerd[2021]: time="2025-04-30T00:45:13.446076620Z" level=info msg="CreateContainer within sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 00:45:13.475255 containerd[2021]: time="2025-04-30T00:45:13.475173152Z" level=info msg="CreateContainer within sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b\""
Apr 30 00:45:13.480652 containerd[2021]: time="2025-04-30T00:45:13.476888036Z" level=info msg="StartContainer for \"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b\""
Apr 30 00:45:13.548043 systemd[1]: Started cri-containerd-3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b.scope - libcontainer container 3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b.
Apr 30 00:45:13.604475 containerd[2021]: time="2025-04-30T00:45:13.604399088Z" level=info msg="StartContainer for \"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b\" returns successfully"
Apr 30 00:45:13.610865 systemd[1]: cri-containerd-3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b.scope: Deactivated successfully.
Apr 30 00:45:13.662376 containerd[2021]: time="2025-04-30T00:45:13.662216337Z" level=info msg="shim disconnected" id=3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b namespace=k8s.io
Apr 30 00:45:13.662376 containerd[2021]: time="2025-04-30T00:45:13.662308353Z" level=warning msg="cleaning up after shim disconnected" id=3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b namespace=k8s.io
Apr 30 00:45:13.662376 containerd[2021]: time="2025-04-30T00:45:13.662330181Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:45:13.777784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b-rootfs.mount: Deactivated successfully.
Apr 30 00:45:14.455592 containerd[2021]: time="2025-04-30T00:45:14.455036757Z" level=info msg="CreateContainer within sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 00:45:14.486960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount860874955.mount: Deactivated successfully.
Apr 30 00:45:14.493059 containerd[2021]: time="2025-04-30T00:45:14.492966033Z" level=info msg="CreateContainer within sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c\""
Apr 30 00:45:14.497645 containerd[2021]: time="2025-04-30T00:45:14.497164497Z" level=info msg="StartContainer for \"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c\""
Apr 30 00:45:14.563050 systemd[1]: Started cri-containerd-6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c.scope - libcontainer container 6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c.
Apr 30 00:45:14.616888 systemd[1]: cri-containerd-6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c.scope: Deactivated successfully. Apr 30 00:45:14.620575 containerd[2021]: time="2025-04-30T00:45:14.620505117Z" level=info msg="StartContainer for \"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c\" returns successfully" Apr 30 00:45:14.666698 containerd[2021]: time="2025-04-30T00:45:14.666611770Z" level=info msg="shim disconnected" id=6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c namespace=k8s.io Apr 30 00:45:14.666698 containerd[2021]: time="2025-04-30T00:45:14.666689806Z" level=warning msg="cleaning up after shim disconnected" id=6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c namespace=k8s.io Apr 30 00:45:14.667164 containerd[2021]: time="2025-04-30T00:45:14.666716194Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:45:14.778285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c-rootfs.mount: Deactivated successfully. 
Apr 30 00:45:15.465904 containerd[2021]: time="2025-04-30T00:45:15.465838426Z" level=info msg="CreateContainer within sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:45:15.508304 containerd[2021]: time="2025-04-30T00:45:15.508147810Z" level=info msg="CreateContainer within sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\"" Apr 30 00:45:15.509562 containerd[2021]: time="2025-04-30T00:45:15.509477950Z" level=info msg="StartContainer for \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\"" Apr 30 00:45:15.571987 systemd[1]: Started cri-containerd-65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541.scope - libcontainer container 65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541. 
Apr 30 00:45:15.632585 containerd[2021]: time="2025-04-30T00:45:15.632294170Z" level=info msg="StartContainer for \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\" returns successfully" Apr 30 00:45:15.867340 kubelet[3266]: I0430 00:45:15.867288 3266 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:45:15.921220 kubelet[3266]: I0430 00:45:15.920861 3266 topology_manager.go:215] "Topology Admit Handler" podUID="bc2966f1-b7f2-4e51-864d-842f211314f4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-82f2v" Apr 30 00:45:15.926726 kubelet[3266]: I0430 00:45:15.925787 3266 topology_manager.go:215] "Topology Admit Handler" podUID="589faa7e-5915-4833-b95b-e8cc01d85f13" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rgffl" Apr 30 00:45:15.950222 systemd[1]: Created slice kubepods-burstable-podbc2966f1_b7f2_4e51_864d_842f211314f4.slice - libcontainer container kubepods-burstable-podbc2966f1_b7f2_4e51_864d_842f211314f4.slice. Apr 30 00:45:15.956379 kubelet[3266]: I0430 00:45:15.956122 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589faa7e-5915-4833-b95b-e8cc01d85f13-config-volume\") pod \"coredns-7db6d8ff4d-rgffl\" (UID: \"589faa7e-5915-4833-b95b-e8cc01d85f13\") " pod="kube-system/coredns-7db6d8ff4d-rgffl" Apr 30 00:45:15.956379 kubelet[3266]: I0430 00:45:15.956232 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc2966f1-b7f2-4e51-864d-842f211314f4-config-volume\") pod \"coredns-7db6d8ff4d-82f2v\" (UID: \"bc2966f1-b7f2-4e51-864d-842f211314f4\") " pod="kube-system/coredns-7db6d8ff4d-82f2v" Apr 30 00:45:15.956892 kubelet[3266]: I0430 00:45:15.956326 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvfvd\" (UniqueName: 
\"kubernetes.io/projected/589faa7e-5915-4833-b95b-e8cc01d85f13-kube-api-access-mvfvd\") pod \"coredns-7db6d8ff4d-rgffl\" (UID: \"589faa7e-5915-4833-b95b-e8cc01d85f13\") " pod="kube-system/coredns-7db6d8ff4d-rgffl" Apr 30 00:45:15.959430 kubelet[3266]: I0430 00:45:15.958381 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gglpz\" (UniqueName: \"kubernetes.io/projected/bc2966f1-b7f2-4e51-864d-842f211314f4-kube-api-access-gglpz\") pod \"coredns-7db6d8ff4d-82f2v\" (UID: \"bc2966f1-b7f2-4e51-864d-842f211314f4\") " pod="kube-system/coredns-7db6d8ff4d-82f2v" Apr 30 00:45:15.980390 systemd[1]: Created slice kubepods-burstable-pod589faa7e_5915_4833_b95b_e8cc01d85f13.slice - libcontainer container kubepods-burstable-pod589faa7e_5915_4833_b95b_e8cc01d85f13.slice. Apr 30 00:45:16.271464 containerd[2021]: time="2025-04-30T00:45:16.271311010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82f2v,Uid:bc2966f1-b7f2-4e51-864d-842f211314f4,Namespace:kube-system,Attempt:0,}" Apr 30 00:45:16.291776 containerd[2021]: time="2025-04-30T00:45:16.291290566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rgffl,Uid:589faa7e-5915-4833-b95b-e8cc01d85f13,Namespace:kube-system,Attempt:0,}" Apr 30 00:45:18.610811 (udev-worker)[4274]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:45:18.612248 (udev-worker)[4238]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:45:18.615607 systemd-networkd[1846]: cilium_host: Link UP Apr 30 00:45:18.616304 systemd-networkd[1846]: cilium_net: Link UP Apr 30 00:45:18.617046 systemd-networkd[1846]: cilium_net: Gained carrier Apr 30 00:45:18.618221 systemd-networkd[1846]: cilium_host: Gained carrier Apr 30 00:45:18.801804 (udev-worker)[4282]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 00:45:18.814657 systemd-networkd[1846]: cilium_vxlan: Link UP Apr 30 00:45:18.814680 systemd-networkd[1846]: cilium_vxlan: Gained carrier Apr 30 00:45:19.164720 systemd-networkd[1846]: cilium_host: Gained IPv6LL Apr 30 00:45:19.228081 systemd-networkd[1846]: cilium_net: Gained IPv6LL Apr 30 00:45:19.332649 kernel: NET: Registered PF_ALG protocol family Apr 30 00:45:20.776328 systemd-networkd[1846]: cilium_vxlan: Gained IPv6LL Apr 30 00:45:20.782921 systemd-networkd[1846]: lxc_health: Link UP Apr 30 00:45:20.788329 systemd-networkd[1846]: lxc_health: Gained carrier Apr 30 00:45:21.403315 systemd-networkd[1846]: lxcf2884373924e: Link UP Apr 30 00:45:21.413664 kernel: eth0: renamed from tmp74697 Apr 30 00:45:21.419770 systemd-networkd[1846]: lxcf2884373924e: Gained carrier Apr 30 00:45:21.439910 systemd-networkd[1846]: lxcf30b4078ab22: Link UP Apr 30 00:45:21.444653 kernel: eth0: renamed from tmpb1389 Apr 30 00:45:21.462986 systemd-networkd[1846]: lxcf30b4078ab22: Gained carrier Apr 30 00:45:21.463059 (udev-worker)[4286]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 00:45:21.849073 kubelet[3266]: I0430 00:45:21.848724 3266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rngsj" podStartSLOduration=14.098741896 podStartE2EDuration="24.848678477s" podCreationTimestamp="2025-04-30 00:44:57 +0000 UTC" firstStartedPulling="2025-04-30 00:45:00.004131785 +0000 UTC m=+18.033001627" lastFinishedPulling="2025-04-30 00:45:10.754068366 +0000 UTC m=+28.782938208" observedRunningTime="2025-04-30 00:45:16.519894599 +0000 UTC m=+34.548764453" watchObservedRunningTime="2025-04-30 00:45:21.848678477 +0000 UTC m=+39.877548331"
Apr 30 00:45:22.299838 systemd-networkd[1846]: lxc_health: Gained IPv6LL
Apr 30 00:45:22.875835 systemd-networkd[1846]: lxcf2884373924e: Gained IPv6LL
Apr 30 00:45:23.516900 systemd-networkd[1846]: lxcf30b4078ab22: Gained IPv6LL
Apr 30 00:45:26.176618 ntpd[1988]: Listen normally on 7 cilium_host 192.168.0.90:123
Apr 30 00:45:26.177589 ntpd[1988]: 30 Apr 00:45:26 ntpd[1988]: Listen normally on 7 cilium_host 192.168.0.90:123
Apr 30 00:45:26.178443 ntpd[1988]: 30 Apr 00:45:26 ntpd[1988]: Listen normally on 8 cilium_net [fe80::c833:4ff:fede:1bc6%4]:123
Apr 30 00:45:26.178443 ntpd[1988]: 30 Apr 00:45:26 ntpd[1988]: Listen normally on 9 cilium_host [fe80::a873:9aff:fecc:a04a%5]:123
Apr 30 00:45:26.178443 ntpd[1988]: 30 Apr 00:45:26 ntpd[1988]: Listen normally on 10 cilium_vxlan [fe80::d48f:57ff:fe24:e926%6]:123
Apr 30 00:45:26.178443 ntpd[1988]: 30 Apr 00:45:26 ntpd[1988]: Listen normally on 11 lxc_health [fe80::b0de:f5ff:fe20:9799%8]:123
Apr 30 00:45:26.178443 ntpd[1988]: 30 Apr 00:45:26 ntpd[1988]: Listen normally on 12 lxcf2884373924e [fe80::c0e3:6eff:fe9f:475e%10]:123
Apr 30 00:45:26.178443 ntpd[1988]: 30 Apr 00:45:26 ntpd[1988]: Listen normally on 13 lxcf30b4078ab22 [fe80::c843:d4ff:fe83:b1aa%12]:123
Apr 30 00:45:26.177890 ntpd[1988]: Listen normally on 8 cilium_net [fe80::c833:4ff:fede:1bc6%4]:123
Apr 30 00:45:26.177993 ntpd[1988]: Listen normally on 9 cilium_host [fe80::a873:9aff:fecc:a04a%5]:123
Apr 30 00:45:26.178061 ntpd[1988]: Listen normally on 10 cilium_vxlan [fe80::d48f:57ff:fe24:e926%6]:123
Apr 30 00:45:26.178127 ntpd[1988]: Listen normally on 11 lxc_health [fe80::b0de:f5ff:fe20:9799%8]:123
Apr 30 00:45:26.178194 ntpd[1988]: Listen normally on 12 lxcf2884373924e [fe80::c0e3:6eff:fe9f:475e%10]:123
Apr 30 00:45:26.178260 ntpd[1988]: Listen normally on 13 lxcf30b4078ab22 [fe80::c843:d4ff:fe83:b1aa%12]:123
Apr 30 00:45:30.517049 containerd[2021]: time="2025-04-30T00:45:30.515990364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:45:30.517049 containerd[2021]: time="2025-04-30T00:45:30.516144924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:45:30.517049 containerd[2021]: time="2025-04-30T00:45:30.516185328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:45:30.517049 containerd[2021]: time="2025-04-30T00:45:30.516411636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:45:30.545181 containerd[2021]: time="2025-04-30T00:45:30.535132524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:45:30.545181 containerd[2021]: time="2025-04-30T00:45:30.535233372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:45:30.545181 containerd[2021]: time="2025-04-30T00:45:30.535273536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:45:30.545181 containerd[2021]: time="2025-04-30T00:45:30.535501200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:45:30.597837 systemd[1]: Started cri-containerd-746972b72a815dcd81a72a55ed5522269e0bf30b95ff22df4a0ad13fa8c03507.scope - libcontainer container 746972b72a815dcd81a72a55ed5522269e0bf30b95ff22df4a0ad13fa8c03507.
Apr 30 00:45:30.683762 systemd[1]: Started cri-containerd-b1389d9bc194cd64716c85320b6c1423cb0f69fba530f7cffa49ff054b78aa34.scope - libcontainer container b1389d9bc194cd64716c85320b6c1423cb0f69fba530f7cffa49ff054b78aa34.
Apr 30 00:45:30.763480 containerd[2021]: time="2025-04-30T00:45:30.761553938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-82f2v,Uid:bc2966f1-b7f2-4e51-864d-842f211314f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"746972b72a815dcd81a72a55ed5522269e0bf30b95ff22df4a0ad13fa8c03507\""
Apr 30 00:45:30.776854 containerd[2021]: time="2025-04-30T00:45:30.776683838Z" level=info msg="CreateContainer within sandbox \"746972b72a815dcd81a72a55ed5522269e0bf30b95ff22df4a0ad13fa8c03507\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 00:45:30.820969 containerd[2021]: time="2025-04-30T00:45:30.820883066Z" level=info msg="CreateContainer within sandbox \"746972b72a815dcd81a72a55ed5522269e0bf30b95ff22df4a0ad13fa8c03507\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be64a354ed832c8149fbb3ea4e8a39f6db7fa63048b46e42ae660ad97e231bbe\""
Apr 30 00:45:30.825822 containerd[2021]: time="2025-04-30T00:45:30.824015378Z" level=info msg="StartContainer for \"be64a354ed832c8149fbb3ea4e8a39f6db7fa63048b46e42ae660ad97e231bbe\""
Apr 30 00:45:30.858379 containerd[2021]: time="2025-04-30T00:45:30.858306242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rgffl,Uid:589faa7e-5915-4833-b95b-e8cc01d85f13,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1389d9bc194cd64716c85320b6c1423cb0f69fba530f7cffa49ff054b78aa34\""
Apr 30 00:45:30.870541 containerd[2021]: time="2025-04-30T00:45:30.870460826Z" level=info msg="CreateContainer within sandbox \"b1389d9bc194cd64716c85320b6c1423cb0f69fba530f7cffa49ff054b78aa34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 00:45:30.905123 containerd[2021]: time="2025-04-30T00:45:30.905050574Z" level=info msg="CreateContainer within sandbox \"b1389d9bc194cd64716c85320b6c1423cb0f69fba530f7cffa49ff054b78aa34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e4862df5b8da973f6e749389468befe888436f5cf635b892831fdc0d4a90493\""
Apr 30 00:45:30.909412 containerd[2021]: time="2025-04-30T00:45:30.909340838Z" level=info msg="StartContainer for \"4e4862df5b8da973f6e749389468befe888436f5cf635b892831fdc0d4a90493\""
Apr 30 00:45:30.921316 systemd[1]: Started cri-containerd-be64a354ed832c8149fbb3ea4e8a39f6db7fa63048b46e42ae660ad97e231bbe.scope - libcontainer container be64a354ed832c8149fbb3ea4e8a39f6db7fa63048b46e42ae660ad97e231bbe.
Apr 30 00:45:31.013865 systemd[1]: Started cri-containerd-4e4862df5b8da973f6e749389468befe888436f5cf635b892831fdc0d4a90493.scope - libcontainer container 4e4862df5b8da973f6e749389468befe888436f5cf635b892831fdc0d4a90493.
Apr 30 00:45:31.041771 containerd[2021]: time="2025-04-30T00:45:31.040581779Z" level=info msg="StartContainer for \"be64a354ed832c8149fbb3ea4e8a39f6db7fa63048b46e42ae660ad97e231bbe\" returns successfully"
Apr 30 00:45:31.139117 containerd[2021]: time="2025-04-30T00:45:31.139008503Z" level=info msg="StartContainer for \"4e4862df5b8da973f6e749389468befe888436f5cf635b892831fdc0d4a90493\" returns successfully"
Apr 30 00:45:31.602822 kubelet[3266]: I0430 00:45:31.602379 3266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-82f2v" podStartSLOduration=33.602340434 podStartE2EDuration="33.602340434s" podCreationTimestamp="2025-04-30 00:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:31.568399418 +0000 UTC m=+49.597269344" watchObservedRunningTime="2025-04-30 00:45:31.602340434 +0000 UTC m=+49.631210288"
Apr 30 00:45:31.606218 kubelet[3266]: I0430 00:45:31.604778 3266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rgffl" podStartSLOduration=33.604734986 podStartE2EDuration="33.604734986s" podCreationTimestamp="2025-04-30 00:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:31.599908886 +0000 UTC m=+49.628778740" watchObservedRunningTime="2025-04-30 00:45:31.604734986 +0000 UTC m=+49.633605956"
Apr 30 00:45:32.415044 systemd[1]: Started sshd@7-172.31.24.0:22-147.75.109.163:34406.service - OpenSSH per-connection server daemon (147.75.109.163:34406).
Apr 30 00:45:32.689470 sshd[4813]: Accepted publickey for core from 147.75.109.163 port 34406 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:32.692080 sshd[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:32.704641 systemd-logind[1995]: New session 8 of user core.
Apr 30 00:45:32.709828 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 00:45:33.064650 sshd[4813]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:33.070986 systemd-logind[1995]: Session 8 logged out. Waiting for processes to exit.
Apr 30 00:45:33.072894 systemd[1]: sshd@7-172.31.24.0:22-147.75.109.163:34406.service: Deactivated successfully.
Apr 30 00:45:33.077365 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 00:45:33.080454 systemd-logind[1995]: Removed session 8.
Apr 30 00:45:38.121237 systemd[1]: Started sshd@8-172.31.24.0:22-147.75.109.163:37836.service - OpenSSH per-connection server daemon (147.75.109.163:37836).
Apr 30 00:45:38.390976 sshd[4835]: Accepted publickey for core from 147.75.109.163 port 37836 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:38.393831 sshd[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:38.401721 systemd-logind[1995]: New session 9 of user core.
Apr 30 00:45:38.411862 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 00:45:38.710305 sshd[4835]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:38.719362 systemd[1]: sshd@8-172.31.24.0:22-147.75.109.163:37836.service: Deactivated successfully.
Apr 30 00:45:38.720254 systemd-logind[1995]: Session 9 logged out. Waiting for processes to exit.
Apr 30 00:45:38.725310 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 00:45:38.727496 systemd-logind[1995]: Removed session 9.
Apr 30 00:45:43.766204 systemd[1]: Started sshd@9-172.31.24.0:22-147.75.109.163:37846.service - OpenSSH per-connection server daemon (147.75.109.163:37846).
Apr 30 00:45:44.030759 sshd[4850]: Accepted publickey for core from 147.75.109.163 port 37846 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:44.033403 sshd[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:44.040849 systemd-logind[1995]: New session 10 of user core.
Apr 30 00:45:44.049835 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 00:45:44.420367 sshd[4850]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:44.427974 systemd-logind[1995]: Session 10 logged out. Waiting for processes to exit.
Apr 30 00:45:44.429817 systemd[1]: sshd@9-172.31.24.0:22-147.75.109.163:37846.service: Deactivated successfully.
Apr 30 00:45:44.434830 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 00:45:44.437387 systemd-logind[1995]: Removed session 10.
Apr 30 00:45:49.480080 systemd[1]: Started sshd@10-172.31.24.0:22-147.75.109.163:51100.service - OpenSSH per-connection server daemon (147.75.109.163:51100).
Apr 30 00:45:49.741657 sshd[4864]: Accepted publickey for core from 147.75.109.163 port 51100 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:49.744436 sshd[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:49.753942 systemd-logind[1995]: New session 11 of user core.
Apr 30 00:45:49.760813 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 00:45:50.070077 sshd[4864]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:50.077828 systemd[1]: sshd@10-172.31.24.0:22-147.75.109.163:51100.service: Deactivated successfully.
Apr 30 00:45:50.082492 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 00:45:50.084977 systemd-logind[1995]: Session 11 logged out. Waiting for processes to exit.
Apr 30 00:45:50.088422 systemd-logind[1995]: Removed session 11.
Apr 30 00:45:50.126140 systemd[1]: Started sshd@11-172.31.24.0:22-147.75.109.163:51112.service - OpenSSH per-connection server daemon (147.75.109.163:51112).
Apr 30 00:45:50.391678 sshd[4877]: Accepted publickey for core from 147.75.109.163 port 51112 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:50.394654 sshd[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:50.403816 systemd-logind[1995]: New session 12 of user core.
Apr 30 00:45:50.412777 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 00:45:50.819090 sshd[4877]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:50.830371 systemd[1]: sshd@11-172.31.24.0:22-147.75.109.163:51112.service: Deactivated successfully.
Apr 30 00:45:50.841476 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 00:45:50.846262 systemd-logind[1995]: Session 12 logged out. Waiting for processes to exit.
Apr 30 00:45:50.851428 systemd-logind[1995]: Removed session 12.
Apr 30 00:45:50.872337 systemd[1]: Started sshd@12-172.31.24.0:22-147.75.109.163:51124.service - OpenSSH per-connection server daemon (147.75.109.163:51124).
Apr 30 00:45:51.151096 sshd[4888]: Accepted publickey for core from 147.75.109.163 port 51124 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:51.153775 sshd[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:51.162660 systemd-logind[1995]: New session 13 of user core.
Apr 30 00:45:51.167010 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 00:45:51.472219 sshd[4888]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:51.479162 systemd[1]: sshd@12-172.31.24.0:22-147.75.109.163:51124.service: Deactivated successfully.
Apr 30 00:45:51.483828 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 00:45:51.485320 systemd-logind[1995]: Session 13 logged out. Waiting for processes to exit.
Apr 30 00:45:51.487795 systemd-logind[1995]: Removed session 13.
Apr 30 00:45:56.528215 systemd[1]: Started sshd@13-172.31.24.0:22-147.75.109.163:51136.service - OpenSSH per-connection server daemon (147.75.109.163:51136).
Apr 30 00:45:56.793785 sshd[4901]: Accepted publickey for core from 147.75.109.163 port 51136 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:56.798155 sshd[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:56.807228 systemd-logind[1995]: New session 14 of user core.
Apr 30 00:45:56.813844 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 00:45:57.111268 sshd[4901]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:57.119707 systemd[1]: sshd@13-172.31.24.0:22-147.75.109.163:51136.service: Deactivated successfully.
Apr 30 00:45:57.124271 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 00:45:57.126054 systemd-logind[1995]: Session 14 logged out. Waiting for processes to exit.
Apr 30 00:45:57.129235 systemd-logind[1995]: Removed session 14.
Apr 30 00:46:02.167387 systemd[1]: Started sshd@14-172.31.24.0:22-147.75.109.163:54988.service - OpenSSH per-connection server daemon (147.75.109.163:54988).
Apr 30 00:46:02.436061 sshd[4916]: Accepted publickey for core from 147.75.109.163 port 54988 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:02.441246 sshd[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:02.457084 systemd-logind[1995]: New session 15 of user core.
Apr 30 00:46:02.464608 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 00:46:02.755828 sshd[4916]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:02.761997 systemd[1]: sshd@14-172.31.24.0:22-147.75.109.163:54988.service: Deactivated successfully.
Apr 30 00:46:02.761998 systemd-logind[1995]: Session 15 logged out. Waiting for processes to exit.
Apr 30 00:46:02.765556 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 00:46:02.770622 systemd-logind[1995]: Removed session 15.
Apr 30 00:46:07.811065 systemd[1]: Started sshd@15-172.31.24.0:22-147.75.109.163:50814.service - OpenSSH per-connection server daemon (147.75.109.163:50814).
Apr 30 00:46:08.082563 sshd[4929]: Accepted publickey for core from 147.75.109.163 port 50814 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:08.085461 sshd[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:08.094368 systemd-logind[1995]: New session 16 of user core.
Apr 30 00:46:08.100795 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 00:46:08.403410 sshd[4929]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:08.409003 systemd[1]: sshd@15-172.31.24.0:22-147.75.109.163:50814.service: Deactivated successfully.
Apr 30 00:46:08.413813 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 00:46:08.417766 systemd-logind[1995]: Session 16 logged out. Waiting for processes to exit.
Apr 30 00:46:08.420586 systemd-logind[1995]: Removed session 16.
Apr 30 00:46:08.459331 systemd[1]: Started sshd@16-172.31.24.0:22-147.75.109.163:50822.service - OpenSSH per-connection server daemon (147.75.109.163:50822).
Apr 30 00:46:08.727016 sshd[4942]: Accepted publickey for core from 147.75.109.163 port 50822 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:08.730638 sshd[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:08.740593 systemd-logind[1995]: New session 17 of user core.
Apr 30 00:46:08.748892 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 00:46:09.114874 sshd[4942]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:09.123753 systemd[1]: sshd@16-172.31.24.0:22-147.75.109.163:50822.service: Deactivated successfully.
Apr 30 00:46:09.127931 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 00:46:09.130329 systemd-logind[1995]: Session 17 logged out. Waiting for processes to exit.
Apr 30 00:46:09.133219 systemd-logind[1995]: Removed session 17.
Apr 30 00:46:09.176080 systemd[1]: Started sshd@17-172.31.24.0:22-147.75.109.163:50838.service - OpenSSH per-connection server daemon (147.75.109.163:50838).
Apr 30 00:46:09.441850 sshd[4953]: Accepted publickey for core from 147.75.109.163 port 50838 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:09.445623 sshd[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:09.455138 systemd-logind[1995]: New session 18 of user core.
Apr 30 00:46:09.464912 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 00:46:12.177795 sshd[4953]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:12.189894 systemd[1]: sshd@17-172.31.24.0:22-147.75.109.163:50838.service: Deactivated successfully.
Apr 30 00:46:12.198861 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 00:46:12.205335 systemd-logind[1995]: Session 18 logged out. Waiting for processes to exit.
Apr 30 00:46:12.209402 systemd-logind[1995]: Removed session 18.
Apr 30 00:46:12.237140 systemd[1]: Started sshd@18-172.31.24.0:22-147.75.109.163:50850.service - OpenSSH per-connection server daemon (147.75.109.163:50850).
Apr 30 00:46:12.509350 sshd[4973]: Accepted publickey for core from 147.75.109.163 port 50850 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:12.512574 sshd[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:12.521890 systemd-logind[1995]: New session 19 of user core.
Apr 30 00:46:12.533972 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 00:46:13.104340 sshd[4973]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:13.111759 systemd[1]: sshd@18-172.31.24.0:22-147.75.109.163:50850.service: Deactivated successfully.
Apr 30 00:46:13.115326 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 00:46:13.118774 systemd-logind[1995]: Session 19 logged out. Waiting for processes to exit.
Apr 30 00:46:13.121209 systemd-logind[1995]: Removed session 19.
Apr 30 00:46:13.163102 systemd[1]: Started sshd@19-172.31.24.0:22-147.75.109.163:50860.service - OpenSSH per-connection server daemon (147.75.109.163:50860).
Apr 30 00:46:13.421629 sshd[4984]: Accepted publickey for core from 147.75.109.163 port 50860 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:13.424887 sshd[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:13.433807 systemd-logind[1995]: New session 20 of user core.
Apr 30 00:46:13.445852 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 00:46:13.734878 sshd[4984]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:13.740017 systemd[1]: sshd@19-172.31.24.0:22-147.75.109.163:50860.service: Deactivated successfully.
Apr 30 00:46:13.743994 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 00:46:13.748262 systemd-logind[1995]: Session 20 logged out. Waiting for processes to exit.
Apr 30 00:46:13.751188 systemd-logind[1995]: Removed session 20.
Apr 30 00:46:18.793688 systemd[1]: Started sshd@20-172.31.24.0:22-147.75.109.163:34176.service - OpenSSH per-connection server daemon (147.75.109.163:34176).
Apr 30 00:46:19.052636 sshd[4996]: Accepted publickey for core from 147.75.109.163 port 34176 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:19.056003 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:19.065717 systemd-logind[1995]: New session 21 of user core.
Apr 30 00:46:19.071806 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 00:46:19.371215 sshd[4996]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:19.380113 systemd[1]: sshd@20-172.31.24.0:22-147.75.109.163:34176.service: Deactivated successfully.
Apr 30 00:46:19.389581 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 00:46:19.391545 systemd-logind[1995]: Session 21 logged out. Waiting for processes to exit.
Apr 30 00:46:19.394005 systemd-logind[1995]: Removed session 21.
Apr 30 00:46:24.427028 systemd[1]: Started sshd@21-172.31.24.0:22-147.75.109.163:34190.service - OpenSSH per-connection server daemon (147.75.109.163:34190).
Apr 30 00:46:24.695367 sshd[5013]: Accepted publickey for core from 147.75.109.163 port 34190 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:24.698423 sshd[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:24.707108 systemd-logind[1995]: New session 22 of user core.
Apr 30 00:46:24.715821 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 00:46:25.011194 sshd[5013]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:25.020744 systemd[1]: sshd@21-172.31.24.0:22-147.75.109.163:34190.service: Deactivated successfully.
Apr 30 00:46:25.026560 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 00:46:25.028789 systemd-logind[1995]: Session 22 logged out. Waiting for processes to exit.
Apr 30 00:46:25.032584 systemd-logind[1995]: Removed session 22.
Apr 30 00:46:30.069086 systemd[1]: Started sshd@22-172.31.24.0:22-147.75.109.163:51090.service - OpenSSH per-connection server daemon (147.75.109.163:51090).
Apr 30 00:46:30.333954 sshd[5025]: Accepted publickey for core from 147.75.109.163 port 51090 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:30.336825 sshd[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:30.346739 systemd-logind[1995]: New session 23 of user core.
Apr 30 00:46:30.353824 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 00:46:30.652401 sshd[5025]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:30.661136 systemd-logind[1995]: Session 23 logged out. Waiting for processes to exit.
Apr 30 00:46:30.662593 systemd[1]: sshd@22-172.31.24.0:22-147.75.109.163:51090.service: Deactivated successfully.
Apr 30 00:46:30.667764 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 00:46:30.671051 systemd-logind[1995]: Removed session 23.
Apr 30 00:46:30.706095 systemd[1]: Started sshd@23-172.31.24.0:22-147.75.109.163:51106.service - OpenSSH per-connection server daemon (147.75.109.163:51106).
Apr 30 00:46:30.973612 sshd[5040]: Accepted publickey for core from 147.75.109.163 port 51106 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:30.976134 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:30.984842 systemd-logind[1995]: New session 24 of user core.
Apr 30 00:46:30.993923 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 00:46:33.562489 containerd[2021]: time="2025-04-30T00:46:33.562365158Z" level=info msg="StopContainer for \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\" with timeout 30 (s)" Apr 30 00:46:33.567907 containerd[2021]: time="2025-04-30T00:46:33.567805262Z" level=info msg="Stop container \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\" with signal terminated" Apr 30 00:46:33.635658 containerd[2021]: time="2025-04-30T00:46:33.635574986Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:46:33.679596 containerd[2021]: time="2025-04-30T00:46:33.677190086Z" level=info msg="StopContainer for \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\" with timeout 2 (s)" Apr 30 00:46:33.680471 containerd[2021]: time="2025-04-30T00:46:33.680423138Z" level=info msg="Stop container \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\" with signal terminated" Apr 30 00:46:33.703167 systemd-networkd[1846]: lxc_health: Link DOWN Apr 30 00:46:33.704045 systemd-networkd[1846]: lxc_health: Lost carrier Apr 30 00:46:33.730603 systemd[1]: cri-containerd-0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2.scope: Deactivated successfully. Apr 30 00:46:33.753952 systemd[1]: cri-containerd-65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541.scope: Deactivated successfully. Apr 30 00:46:33.755499 systemd[1]: cri-containerd-65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541.scope: Consumed 15.303s CPU time. Apr 30 00:46:33.797452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2-rootfs.mount: Deactivated successfully. 
Apr 30 00:46:33.815724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541-rootfs.mount: Deactivated successfully. Apr 30 00:46:33.821113 containerd[2021]: time="2025-04-30T00:46:33.821035887Z" level=info msg="shim disconnected" id=0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2 namespace=k8s.io Apr 30 00:46:33.821489 containerd[2021]: time="2025-04-30T00:46:33.821204955Z" level=warning msg="cleaning up after shim disconnected" id=0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2 namespace=k8s.io Apr 30 00:46:33.821489 containerd[2021]: time="2025-04-30T00:46:33.821231679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:46:33.827787 containerd[2021]: time="2025-04-30T00:46:33.827668755Z" level=info msg="shim disconnected" id=65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541 namespace=k8s.io Apr 30 00:46:33.827787 containerd[2021]: time="2025-04-30T00:46:33.827753871Z" level=warning msg="cleaning up after shim disconnected" id=65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541 namespace=k8s.io Apr 30 00:46:33.827787 containerd[2021]: time="2025-04-30T00:46:33.827788455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:46:33.861901 containerd[2021]: time="2025-04-30T00:46:33.861843963Z" level=info msg="StopContainer for \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\" returns successfully" Apr 30 00:46:33.863178 containerd[2021]: time="2025-04-30T00:46:33.863127699Z" level=info msg="StopPodSandbox for \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\"" Apr 30 00:46:33.863499 containerd[2021]: time="2025-04-30T00:46:33.863461083Z" level=info msg="Container to stop \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:46:33.868912 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d-shm.mount: Deactivated successfully. Apr 30 00:46:33.874744 containerd[2021]: time="2025-04-30T00:46:33.874639563Z" level=info msg="StopContainer for \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\" returns successfully" Apr 30 00:46:33.875582 containerd[2021]: time="2025-04-30T00:46:33.875358159Z" level=info msg="StopPodSandbox for \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\"" Apr 30 00:46:33.875582 containerd[2021]: time="2025-04-30T00:46:33.875427495Z" level=info msg="Container to stop \"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:46:33.875582 containerd[2021]: time="2025-04-30T00:46:33.875452995Z" level=info msg="Container to stop \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:46:33.875582 containerd[2021]: time="2025-04-30T00:46:33.875476071Z" level=info msg="Container to stop \"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:46:33.875582 containerd[2021]: time="2025-04-30T00:46:33.875501055Z" level=info msg="Container to stop \"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:46:33.875582 containerd[2021]: time="2025-04-30T00:46:33.875550627Z" level=info msg="Container to stop \"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 30 00:46:33.884147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11-shm.mount: Deactivated 
successfully. Apr 30 00:46:33.887995 systemd[1]: cri-containerd-fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d.scope: Deactivated successfully. Apr 30 00:46:33.905565 systemd[1]: cri-containerd-820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11.scope: Deactivated successfully. Apr 30 00:46:33.951169 containerd[2021]: time="2025-04-30T00:46:33.951079863Z" level=info msg="shim disconnected" id=fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d namespace=k8s.io Apr 30 00:46:33.951169 containerd[2021]: time="2025-04-30T00:46:33.951168615Z" level=warning msg="cleaning up after shim disconnected" id=fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d namespace=k8s.io Apr 30 00:46:33.951169 containerd[2021]: time="2025-04-30T00:46:33.951191619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:46:33.952174 containerd[2021]: time="2025-04-30T00:46:33.951642699Z" level=info msg="shim disconnected" id=820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11 namespace=k8s.io Apr 30 00:46:33.952174 containerd[2021]: time="2025-04-30T00:46:33.951702687Z" level=warning msg="cleaning up after shim disconnected" id=820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11 namespace=k8s.io Apr 30 00:46:33.952174 containerd[2021]: time="2025-04-30T00:46:33.951726843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:46:33.984845 containerd[2021]: time="2025-04-30T00:46:33.984777772Z" level=info msg="TearDown network for sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" successfully" Apr 30 00:46:33.985203 containerd[2021]: time="2025-04-30T00:46:33.985020640Z" level=info msg="StopPodSandbox for \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" returns successfully" Apr 30 00:46:34.001874 containerd[2021]: time="2025-04-30T00:46:34.001731960Z" level=info msg="TearDown network for sandbox 
\"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\" successfully" Apr 30 00:46:34.001874 containerd[2021]: time="2025-04-30T00:46:34.001791636Z" level=info msg="StopPodSandbox for \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\" returns successfully" Apr 30 00:46:34.101569 kubelet[3266]: I0430 00:46:34.099649 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-hostproc\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.101569 kubelet[3266]: I0430 00:46:34.099716 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4663941c-a276-4655-8c11-f802888445f8-cilium-config-path\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.101569 kubelet[3266]: I0430 00:46:34.099755 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-lib-modules\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.101569 kubelet[3266]: I0430 00:46:34.099791 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4663941c-a276-4655-8c11-f802888445f8-hubble-tls\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.101569 kubelet[3266]: I0430 00:46:34.099825 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-etc-cni-netd\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: 
\"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.101569 kubelet[3266]: I0430 00:46:34.099855 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-bpf-maps\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.102402 kubelet[3266]: I0430 00:46:34.099892 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tkr9\" (UniqueName: \"kubernetes.io/projected/4663941c-a276-4655-8c11-f802888445f8-kube-api-access-8tkr9\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.102402 kubelet[3266]: I0430 00:46:34.099929 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-host-proc-sys-kernel\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.102402 kubelet[3266]: I0430 00:46:34.099961 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-xtables-lock\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.102402 kubelet[3266]: I0430 00:46:34.099991 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cilium-cgroup\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.102402 kubelet[3266]: I0430 00:46:34.100046 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w72h5\" (UniqueName: 
\"kubernetes.io/projected/9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0-kube-api-access-w72h5\") pod \"9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0\" (UID: \"9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0\") " Apr 30 00:46:34.102402 kubelet[3266]: I0430 00:46:34.100088 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4663941c-a276-4655-8c11-f802888445f8-clustermesh-secrets\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.104668 kubelet[3266]: I0430 00:46:34.100124 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-host-proc-sys-net\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.104668 kubelet[3266]: I0430 00:46:34.100160 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cilium-run\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.104668 kubelet[3266]: I0430 00:46:34.100199 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0-cilium-config-path\") pod \"9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0\" (UID: \"9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0\") " Apr 30 00:46:34.104668 kubelet[3266]: I0430 00:46:34.100238 3266 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cni-path\") pod \"4663941c-a276-4655-8c11-f802888445f8\" (UID: \"4663941c-a276-4655-8c11-f802888445f8\") " Apr 30 00:46:34.104668 kubelet[3266]: I0430 00:46:34.100155 
3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:46:34.104668 kubelet[3266]: I0430 00:46:34.100191 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-hostproc" (OuterVolumeSpecName: "hostproc") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:46:34.104997 kubelet[3266]: I0430 00:46:34.100319 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cni-path" (OuterVolumeSpecName: "cni-path") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:46:34.104997 kubelet[3266]: I0430 00:46:34.101273 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:46:34.104997 kubelet[3266]: I0430 00:46:34.101352 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:46:34.104997 kubelet[3266]: I0430 00:46:34.101430 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:46:34.104997 kubelet[3266]: I0430 00:46:34.101459 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:46:34.105262 kubelet[3266]: I0430 00:46:34.102912 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:46:34.105262 kubelet[3266]: I0430 00:46:34.102967 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:46:34.105262 kubelet[3266]: I0430 00:46:34.102981 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 30 00:46:34.117684 kubelet[3266]: I0430 00:46:34.116682 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4663941c-a276-4655-8c11-f802888445f8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 30 00:46:34.117684 kubelet[3266]: I0430 00:46:34.116695 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0-kube-api-access-w72h5" (OuterVolumeSpecName: "kube-api-access-w72h5") pod "9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0" (UID: "9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0"). InnerVolumeSpecName "kube-api-access-w72h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:46:34.118602 kubelet[3266]: I0430 00:46:34.117584 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4663941c-a276-4655-8c11-f802888445f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:46:34.120895 kubelet[3266]: I0430 00:46:34.120839 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0" (UID: "9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 30 00:46:34.121352 kubelet[3266]: I0430 00:46:34.121274 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4663941c-a276-4655-8c11-f802888445f8-kube-api-access-8tkr9" (OuterVolumeSpecName: "kube-api-access-8tkr9") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "kube-api-access-8tkr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:46:34.121494 kubelet[3266]: I0430 00:46:34.121459 3266 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4663941c-a276-4655-8c11-f802888445f8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4663941c-a276-4655-8c11-f802888445f8" (UID: "4663941c-a276-4655-8c11-f802888445f8"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 30 00:46:34.201226 kubelet[3266]: I0430 00:46:34.201160 3266 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-host-proc-sys-kernel\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201226 kubelet[3266]: I0430 00:46:34.201217 3266 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-xtables-lock\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201429 kubelet[3266]: I0430 00:46:34.201241 3266 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4663941c-a276-4655-8c11-f802888445f8-clustermesh-secrets\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201429 kubelet[3266]: I0430 00:46:34.201261 3266 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-host-proc-sys-net\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201429 kubelet[3266]: I0430 00:46:34.201281 3266 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cilium-cgroup\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201429 kubelet[3266]: I0430 00:46:34.201300 3266 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w72h5\" (UniqueName: \"kubernetes.io/projected/9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0-kube-api-access-w72h5\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201429 kubelet[3266]: I0430 00:46:34.201319 3266 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0-cilium-config-path\") on node 
\"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201429 kubelet[3266]: I0430 00:46:34.201338 3266 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cilium-run\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201429 kubelet[3266]: I0430 00:46:34.201389 3266 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-cni-path\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201429 kubelet[3266]: I0430 00:46:34.201412 3266 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-lib-modules\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201892 kubelet[3266]: I0430 00:46:34.201431 3266 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-hostproc\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201892 kubelet[3266]: I0430 00:46:34.201453 3266 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4663941c-a276-4655-8c11-f802888445f8-cilium-config-path\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201892 kubelet[3266]: I0430 00:46:34.201472 3266 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4663941c-a276-4655-8c11-f802888445f8-hubble-tls\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201892 kubelet[3266]: I0430 00:46:34.201490 3266 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-bpf-maps\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201892 kubelet[3266]: I0430 00:46:34.201545 3266 
reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8tkr9\" (UniqueName: \"kubernetes.io/projected/4663941c-a276-4655-8c11-f802888445f8-kube-api-access-8tkr9\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.201892 kubelet[3266]: I0430 00:46:34.201571 3266 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4663941c-a276-4655-8c11-f802888445f8-etc-cni-netd\") on node \"ip-172-31-24-0\" DevicePath \"\"" Apr 30 00:46:34.229616 systemd[1]: Removed slice kubepods-besteffort-pod9a8dedf3_c2f4_4ec5_9f9f_36506e3bdea0.slice - libcontainer container kubepods-besteffort-pod9a8dedf3_c2f4_4ec5_9f9f_36506e3bdea0.slice. Apr 30 00:46:34.233189 systemd[1]: Removed slice kubepods-burstable-pod4663941c_a276_4655_8c11_f802888445f8.slice - libcontainer container kubepods-burstable-pod4663941c_a276_4655_8c11_f802888445f8.slice. Apr 30 00:46:34.233407 systemd[1]: kubepods-burstable-pod4663941c_a276_4655_8c11_f802888445f8.slice: Consumed 15.455s CPU time. Apr 30 00:46:34.583348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11-rootfs.mount: Deactivated successfully. Apr 30 00:46:34.583578 systemd[1]: var-lib-kubelet-pods-4663941c\x2da276\x2d4655\x2d8c11\x2df802888445f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8tkr9.mount: Deactivated successfully. Apr 30 00:46:34.584229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d-rootfs.mount: Deactivated successfully. Apr 30 00:46:34.584377 systemd[1]: var-lib-kubelet-pods-9a8dedf3\x2dc2f4\x2d4ec5\x2d9f9f\x2d36506e3bdea0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw72h5.mount: Deactivated successfully. 
Apr 30 00:46:34.584540 systemd[1]: var-lib-kubelet-pods-4663941c\x2da276\x2d4655\x2d8c11\x2df802888445f8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 30 00:46:34.585105 systemd[1]: var-lib-kubelet-pods-4663941c\x2da276\x2d4655\x2d8c11\x2df802888445f8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 00:46:34.727552 kubelet[3266]: I0430 00:46:34.726638 3266 scope.go:117] "RemoveContainer" containerID="65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541" Apr 30 00:46:34.736967 containerd[2021]: time="2025-04-30T00:46:34.736539735Z" level=info msg="RemoveContainer for \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\"" Apr 30 00:46:34.751023 containerd[2021]: time="2025-04-30T00:46:34.750401559Z" level=info msg="RemoveContainer for \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\" returns successfully" Apr 30 00:46:34.751577 kubelet[3266]: I0430 00:46:34.751493 3266 scope.go:117] "RemoveContainer" containerID="6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c" Apr 30 00:46:34.758023 containerd[2021]: time="2025-04-30T00:46:34.757704267Z" level=info msg="RemoveContainer for \"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c\"" Apr 30 00:46:34.766267 containerd[2021]: time="2025-04-30T00:46:34.766054539Z" level=info msg="RemoveContainer for \"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c\" returns successfully" Apr 30 00:46:34.766484 kubelet[3266]: I0430 00:46:34.766401 3266 scope.go:117] "RemoveContainer" containerID="3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b" Apr 30 00:46:34.772655 containerd[2021]: time="2025-04-30T00:46:34.772124320Z" level=info msg="RemoveContainer for \"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b\"" Apr 30 00:46:34.778346 containerd[2021]: time="2025-04-30T00:46:34.778268464Z" level=info msg="RemoveContainer for 
\"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b\" returns successfully" Apr 30 00:46:34.779134 kubelet[3266]: I0430 00:46:34.778621 3266 scope.go:117] "RemoveContainer" containerID="ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a" Apr 30 00:46:34.780459 containerd[2021]: time="2025-04-30T00:46:34.780413680Z" level=info msg="RemoveContainer for \"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a\"" Apr 30 00:46:34.787260 containerd[2021]: time="2025-04-30T00:46:34.787127884Z" level=info msg="RemoveContainer for \"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a\" returns successfully" Apr 30 00:46:34.787781 kubelet[3266]: I0430 00:46:34.787675 3266 scope.go:117] "RemoveContainer" containerID="81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df" Apr 30 00:46:34.792299 containerd[2021]: time="2025-04-30T00:46:34.791848300Z" level=info msg="RemoveContainer for \"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df\"" Apr 30 00:46:34.798799 containerd[2021]: time="2025-04-30T00:46:34.798666760Z" level=info msg="RemoveContainer for \"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df\" returns successfully" Apr 30 00:46:34.799631 kubelet[3266]: I0430 00:46:34.799147 3266 scope.go:117] "RemoveContainer" containerID="65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541" Apr 30 00:46:34.799756 containerd[2021]: time="2025-04-30T00:46:34.799498012Z" level=error msg="ContainerStatus for \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\": not found" Apr 30 00:46:34.800391 kubelet[3266]: E0430 00:46:34.800068 3266 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\": not found" containerID="65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541" Apr 30 00:46:34.800391 kubelet[3266]: I0430 00:46:34.800121 3266 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541"} err="failed to get container status \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\": rpc error: code = NotFound desc = an error occurred when try to find container \"65fe88210e8c3a64091a9774b7a3eda8b57a9bffe82659f04e9b935f3a98d541\": not found" Apr 30 00:46:34.800391 kubelet[3266]: I0430 00:46:34.800261 3266 scope.go:117] "RemoveContainer" containerID="6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c" Apr 30 00:46:34.801358 containerd[2021]: time="2025-04-30T00:46:34.800890672Z" level=error msg="ContainerStatus for \"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c\": not found" Apr 30 00:46:34.801487 kubelet[3266]: E0430 00:46:34.801137 3266 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c\": not found" containerID="6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c" Apr 30 00:46:34.801487 kubelet[3266]: I0430 00:46:34.801181 3266 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c"} err="failed to get container status \"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"6558096cb0a9d0fff221d493ae6d7be7b2100d59327c4baae0dae1aa96cfbc2c\": not found" Apr 30 00:46:34.801487 kubelet[3266]: I0430 00:46:34.801228 3266 scope.go:117] "RemoveContainer" containerID="3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b" Apr 30 00:46:34.802022 containerd[2021]: time="2025-04-30T00:46:34.801881752Z" level=error msg="ContainerStatus for \"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b\": not found" Apr 30 00:46:34.802249 kubelet[3266]: E0430 00:46:34.802204 3266 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b\": not found" containerID="3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b" Apr 30 00:46:34.802342 kubelet[3266]: I0430 00:46:34.802261 3266 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b"} err="failed to get container status \"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bc3c73db50726691d2d1acba5a35e9d2e0336fc4325de4fee85527083b9590b\": not found" Apr 30 00:46:34.802342 kubelet[3266]: I0430 00:46:34.802299 3266 scope.go:117] "RemoveContainer" containerID="ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a" Apr 30 00:46:34.803113 containerd[2021]: time="2025-04-30T00:46:34.802722172Z" level=error msg="ContainerStatus for \"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a\": not found" Apr 30 00:46:34.803220 kubelet[3266]: E0430 00:46:34.802932 3266 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a\": not found" containerID="ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a" Apr 30 00:46:34.803220 kubelet[3266]: I0430 00:46:34.802971 3266 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a"} err="failed to get container status \"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed9cc214595e6909484a206f307e22d162cc4e6d826c6687dfec487ef610e28a\": not found" Apr 30 00:46:34.803220 kubelet[3266]: I0430 00:46:34.803002 3266 scope.go:117] "RemoveContainer" containerID="81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df" Apr 30 00:46:34.803435 containerd[2021]: time="2025-04-30T00:46:34.803261956Z" level=error msg="ContainerStatus for \"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df\": not found" Apr 30 00:46:34.803938 kubelet[3266]: E0430 00:46:34.803749 3266 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df\": not found" containerID="81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df" Apr 30 00:46:34.803938 kubelet[3266]: I0430 00:46:34.803792 3266 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df"} err="failed to get container status \"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df\": rpc error: code = NotFound desc = an error occurred when try to find container \"81bb9c10a9ba21b7bd2a367558c2c9a0c2fcc261ea936036c67a6d714d5ce5df\": not found" Apr 30 00:46:34.803938 kubelet[3266]: I0430 00:46:34.803821 3266 scope.go:117] "RemoveContainer" containerID="0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2" Apr 30 00:46:34.806315 containerd[2021]: time="2025-04-30T00:46:34.805885768Z" level=info msg="RemoveContainer for \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\"" Apr 30 00:46:34.811843 containerd[2021]: time="2025-04-30T00:46:34.811718692Z" level=info msg="RemoveContainer for \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\" returns successfully" Apr 30 00:46:34.812235 kubelet[3266]: I0430 00:46:34.812199 3266 scope.go:117] "RemoveContainer" containerID="0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2" Apr 30 00:46:34.812647 containerd[2021]: time="2025-04-30T00:46:34.812584972Z" level=error msg="ContainerStatus for \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\": not found" Apr 30 00:46:34.812897 kubelet[3266]: E0430 00:46:34.812826 3266 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\": not found" containerID="0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2" Apr 30 00:46:34.812897 kubelet[3266]: I0430 00:46:34.812867 3266 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2"} err="failed to get container status \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c20db5ea9f7f3c5c971eef2c005ba3af32646ac9463697c60d3b8ae55be5aa2\": not found" Apr 30 00:46:35.485920 sshd[5040]: pam_unix(sshd:session): session closed for user core Apr 30 00:46:35.492717 systemd[1]: sshd@23-172.31.24.0:22-147.75.109.163:51106.service: Deactivated successfully. Apr 30 00:46:35.498290 systemd[1]: session-24.scope: Deactivated successfully. Apr 30 00:46:35.498940 systemd[1]: session-24.scope: Consumed 1.760s CPU time. Apr 30 00:46:35.500045 systemd-logind[1995]: Session 24 logged out. Waiting for processes to exit. Apr 30 00:46:35.502033 systemd-logind[1995]: Removed session 24. Apr 30 00:46:35.538080 systemd[1]: Started sshd@24-172.31.24.0:22-147.75.109.163:51110.service - OpenSSH per-connection server daemon (147.75.109.163:51110). Apr 30 00:46:35.803433 sshd[5203]: Accepted publickey for core from 147.75.109.163 port 51110 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:46:35.806609 sshd[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:46:35.816165 systemd-logind[1995]: New session 25 of user core. Apr 30 00:46:35.823904 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 30 00:46:36.176829 ntpd[1988]: Deleting interface #11 lxc_health, fe80::b0de:f5ff:fe20:9799%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs Apr 30 00:46:36.177461 ntpd[1988]: 30 Apr 00:46:36 ntpd[1988]: Deleting interface #11 lxc_health, fe80::b0de:f5ff:fe20:9799%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs Apr 30 00:46:36.220560 kubelet[3266]: I0430 00:46:36.220069 3266 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4663941c-a276-4655-8c11-f802888445f8" path="/var/lib/kubelet/pods/4663941c-a276-4655-8c11-f802888445f8/volumes" Apr 30 00:46:36.222832 kubelet[3266]: I0430 00:46:36.222258 3266 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0" path="/var/lib/kubelet/pods/9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0/volumes" Apr 30 00:46:37.449603 kubelet[3266]: E0430 00:46:37.449467 3266 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 00:46:37.953820 sshd[5203]: pam_unix(sshd:session): session closed for user core Apr 30 00:46:37.968397 systemd-logind[1995]: Session 25 logged out. Waiting for processes to exit. Apr 30 00:46:37.971571 systemd[1]: sshd@24-172.31.24.0:22-147.75.109.163:51110.service: Deactivated successfully. Apr 30 00:46:37.978721 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 00:46:37.982038 systemd[1]: session-25.scope: Consumed 1.864s CPU time. Apr 30 00:46:37.986448 systemd-logind[1995]: Removed session 25. Apr 30 00:46:38.013399 systemd[1]: Started sshd@25-172.31.24.0:22-147.75.109.163:34740.service - OpenSSH per-connection server daemon (147.75.109.163:34740). 
Apr 30 00:46:38.043087 kubelet[3266]: I0430 00:46:38.043007 3266 topology_manager.go:215] "Topology Admit Handler" podUID="32e0f42a-7dbf-4c18-a4f1-19957bd7a31e" podNamespace="kube-system" podName="cilium-wf4nb" Apr 30 00:46:38.043224 kubelet[3266]: E0430 00:46:38.043126 3266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0" containerName="cilium-operator" Apr 30 00:46:38.043224 kubelet[3266]: E0430 00:46:38.043174 3266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4663941c-a276-4655-8c11-f802888445f8" containerName="apply-sysctl-overwrites" Apr 30 00:46:38.043224 kubelet[3266]: E0430 00:46:38.043192 3266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4663941c-a276-4655-8c11-f802888445f8" containerName="clean-cilium-state" Apr 30 00:46:38.043224 kubelet[3266]: E0430 00:46:38.043207 3266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4663941c-a276-4655-8c11-f802888445f8" containerName="cilium-agent" Apr 30 00:46:38.043545 kubelet[3266]: E0430 00:46:38.043246 3266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4663941c-a276-4655-8c11-f802888445f8" containerName="mount-cgroup" Apr 30 00:46:38.043545 kubelet[3266]: E0430 00:46:38.043268 3266 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4663941c-a276-4655-8c11-f802888445f8" containerName="mount-bpf-fs" Apr 30 00:46:38.043545 kubelet[3266]: I0430 00:46:38.043351 3266 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a8dedf3-c2f4-4ec5-9f9f-36506e3bdea0" containerName="cilium-operator" Apr 30 00:46:38.043545 kubelet[3266]: I0430 00:46:38.043369 3266 memory_manager.go:354] "RemoveStaleState removing state" podUID="4663941c-a276-4655-8c11-f802888445f8" containerName="cilium-agent" Apr 30 00:46:38.073152 kubelet[3266]: W0430 00:46:38.072980 3266 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets 
"cilium-clustermesh" is forbidden: User "system:node:ip-172-31-24-0" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-0' and this object Apr 30 00:46:38.073152 kubelet[3266]: E0430 00:46:38.073101 3266 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-24-0" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-0' and this object Apr 30 00:46:38.073615 systemd[1]: Created slice kubepods-burstable-pod32e0f42a_7dbf_4c18_a4f1_19957bd7a31e.slice - libcontainer container kubepods-burstable-pod32e0f42a_7dbf_4c18_a4f1_19957bd7a31e.slice. Apr 30 00:46:38.133087 kubelet[3266]: I0430 00:46:38.131762 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-cilium-ipsec-secrets\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.133087 kubelet[3266]: I0430 00:46:38.131853 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-cilium-run\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.133087 kubelet[3266]: I0430 00:46:38.131893 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-xtables-lock\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.133087 kubelet[3266]: I0430 
00:46:38.131932 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-cilium-config-path\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.133087 kubelet[3266]: I0430 00:46:38.131979 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ht7g\" (UniqueName: \"kubernetes.io/projected/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-kube-api-access-6ht7g\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.133087 kubelet[3266]: I0430 00:46:38.132019 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-hostproc\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.133670 kubelet[3266]: I0430 00:46:38.132057 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-cni-path\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.133670 kubelet[3266]: I0430 00:46:38.132093 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-etc-cni-netd\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.133670 kubelet[3266]: I0430 00:46:38.132133 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-clustermesh-secrets\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.133670 kubelet[3266]: I0430 00:46:38.132171 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-host-proc-sys-kernel\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.133670 kubelet[3266]: I0430 00:46:38.132203 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-hubble-tls\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.133670 kubelet[3266]: I0430 00:46:38.132237 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-bpf-maps\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.134021 kubelet[3266]: I0430 00:46:38.132277 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-cilium-cgroup\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.134021 kubelet[3266]: I0430 00:46:38.132308 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-lib-modules\") pod \"cilium-wf4nb\" 
(UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.134021 kubelet[3266]: I0430 00:46:38.132341 3266 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-host-proc-sys-net\") pod \"cilium-wf4nb\" (UID: \"32e0f42a-7dbf-4c18-a4f1-19957bd7a31e\") " pod="kube-system/cilium-wf4nb" Apr 30 00:46:38.314839 sshd[5215]: Accepted publickey for core from 147.75.109.163 port 34740 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:46:38.317741 sshd[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:46:38.326632 systemd-logind[1995]: New session 26 of user core. Apr 30 00:46:38.334768 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 00:46:38.506656 sshd[5215]: pam_unix(sshd:session): session closed for user core Apr 30 00:46:38.514576 systemd[1]: sshd@25-172.31.24.0:22-147.75.109.163:34740.service: Deactivated successfully. Apr 30 00:46:38.518769 systemd[1]: session-26.scope: Deactivated successfully. Apr 30 00:46:38.520709 systemd-logind[1995]: Session 26 logged out. Waiting for processes to exit. Apr 30 00:46:38.522930 systemd-logind[1995]: Removed session 26. Apr 30 00:46:38.565251 systemd[1]: Started sshd@26-172.31.24.0:22-147.75.109.163:34754.service - OpenSSH per-connection server daemon (147.75.109.163:34754). Apr 30 00:46:38.825786 sshd[5226]: Accepted publickey for core from 147.75.109.163 port 34754 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:46:38.828490 sshd[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:46:38.836893 systemd-logind[1995]: New session 27 of user core. Apr 30 00:46:38.846657 systemd[1]: Started session-27.scope - Session 27 of User core. 
Apr 30 00:46:39.237731 kubelet[3266]: E0430 00:46:39.236791 3266 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Apr 30 00:46:39.237731 kubelet[3266]: E0430 00:46:39.236927 3266 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-clustermesh-secrets podName:32e0f42a-7dbf-4c18-a4f1-19957bd7a31e nodeName:}" failed. No retries permitted until 2025-04-30 00:46:39.736899014 +0000 UTC m=+117.765768856 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/32e0f42a-7dbf-4c18-a4f1-19957bd7a31e-clustermesh-secrets") pod "cilium-wf4nb" (UID: "32e0f42a-7dbf-4c18-a4f1-19957bd7a31e") : failed to sync secret cache: timed out waiting for the condition Apr 30 00:46:39.881374 containerd[2021]: time="2025-04-30T00:46:39.881295489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wf4nb,Uid:32e0f42a-7dbf-4c18-a4f1-19957bd7a31e,Namespace:kube-system,Attempt:0,}" Apr 30 00:46:39.927491 containerd[2021]: time="2025-04-30T00:46:39.926264493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:46:39.927491 containerd[2021]: time="2025-04-30T00:46:39.927170217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:46:39.927491 containerd[2021]: time="2025-04-30T00:46:39.927201885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:46:39.927491 containerd[2021]: time="2025-04-30T00:46:39.927440325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:46:39.971061 systemd[1]: Started cri-containerd-cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4.scope - libcontainer container cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4. Apr 30 00:46:40.021431 containerd[2021]: time="2025-04-30T00:46:40.021288942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wf4nb,Uid:32e0f42a-7dbf-4c18-a4f1-19957bd7a31e,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4\"" Apr 30 00:46:40.029574 containerd[2021]: time="2025-04-30T00:46:40.028743126Z" level=info msg="CreateContainer within sandbox \"cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:46:40.070864 containerd[2021]: time="2025-04-30T00:46:40.070799034Z" level=info msg="CreateContainer within sandbox \"cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"525f5c44bbbfa93badbadf39c1cc782c194f3721e6cc02d1a474e5de361b2af5\"" Apr 30 00:46:40.072350 containerd[2021]: time="2025-04-30T00:46:40.072238062Z" level=info msg="StartContainer for \"525f5c44bbbfa93badbadf39c1cc782c194f3721e6cc02d1a474e5de361b2af5\"" Apr 30 00:46:40.123838 systemd[1]: Started cri-containerd-525f5c44bbbfa93badbadf39c1cc782c194f3721e6cc02d1a474e5de361b2af5.scope - libcontainer container 525f5c44bbbfa93badbadf39c1cc782c194f3721e6cc02d1a474e5de361b2af5. Apr 30 00:46:40.173190 containerd[2021]: time="2025-04-30T00:46:40.172900374Z" level=info msg="StartContainer for \"525f5c44bbbfa93badbadf39c1cc782c194f3721e6cc02d1a474e5de361b2af5\" returns successfully" Apr 30 00:46:40.191692 systemd[1]: cri-containerd-525f5c44bbbfa93badbadf39c1cc782c194f3721e6cc02d1a474e5de361b2af5.scope: Deactivated successfully. 
Apr 30 00:46:40.251643 containerd[2021]: time="2025-04-30T00:46:40.251563243Z" level=info msg="shim disconnected" id=525f5c44bbbfa93badbadf39c1cc782c194f3721e6cc02d1a474e5de361b2af5 namespace=k8s.io Apr 30 00:46:40.251643 containerd[2021]: time="2025-04-30T00:46:40.251642431Z" level=warning msg="cleaning up after shim disconnected" id=525f5c44bbbfa93badbadf39c1cc782c194f3721e6cc02d1a474e5de361b2af5 namespace=k8s.io Apr 30 00:46:40.251970 containerd[2021]: time="2025-04-30T00:46:40.251664667Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:46:40.754756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1716257065.mount: Deactivated successfully. Apr 30 00:46:40.774838 containerd[2021]: time="2025-04-30T00:46:40.773854137Z" level=info msg="CreateContainer within sandbox \"cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:46:40.802102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130063578.mount: Deactivated successfully. Apr 30 00:46:40.812632 containerd[2021]: time="2025-04-30T00:46:40.812491042Z" level=info msg="CreateContainer within sandbox \"cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"590ec5a3574ccb8d3a9223ad0321414e7b25d558af7771117aac3bdb9f6e287b\"" Apr 30 00:46:40.814351 containerd[2021]: time="2025-04-30T00:46:40.813959326Z" level=info msg="StartContainer for \"590ec5a3574ccb8d3a9223ad0321414e7b25d558af7771117aac3bdb9f6e287b\"" Apr 30 00:46:40.882848 systemd[1]: Started cri-containerd-590ec5a3574ccb8d3a9223ad0321414e7b25d558af7771117aac3bdb9f6e287b.scope - libcontainer container 590ec5a3574ccb8d3a9223ad0321414e7b25d558af7771117aac3bdb9f6e287b. 
Apr 30 00:46:40.933642 containerd[2021]: time="2025-04-30T00:46:40.933339898Z" level=info msg="StartContainer for \"590ec5a3574ccb8d3a9223ad0321414e7b25d558af7771117aac3bdb9f6e287b\" returns successfully" Apr 30 00:46:40.947296 systemd[1]: cri-containerd-590ec5a3574ccb8d3a9223ad0321414e7b25d558af7771117aac3bdb9f6e287b.scope: Deactivated successfully. Apr 30 00:46:41.003253 containerd[2021]: time="2025-04-30T00:46:41.002910822Z" level=info msg="shim disconnected" id=590ec5a3574ccb8d3a9223ad0321414e7b25d558af7771117aac3bdb9f6e287b namespace=k8s.io Apr 30 00:46:41.003253 containerd[2021]: time="2025-04-30T00:46:41.003032574Z" level=warning msg="cleaning up after shim disconnected" id=590ec5a3574ccb8d3a9223ad0321414e7b25d558af7771117aac3bdb9f6e287b namespace=k8s.io Apr 30 00:46:41.003253 containerd[2021]: time="2025-04-30T00:46:41.003056682Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:46:41.754953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-590ec5a3574ccb8d3a9223ad0321414e7b25d558af7771117aac3bdb9f6e287b-rootfs.mount: Deactivated successfully. Apr 30 00:46:41.779868 containerd[2021]: time="2025-04-30T00:46:41.779627662Z" level=info msg="CreateContainer within sandbox \"cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:46:41.815088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645905671.mount: Deactivated successfully. 
Apr 30 00:46:41.828555 containerd[2021]: time="2025-04-30T00:46:41.827996687Z" level=info msg="CreateContainer within sandbox \"cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b5612d3c9809c3e7352bd4113ec6fa572f2854c77e6651bec2423676ada1cd02\"" Apr 30 00:46:41.835709 containerd[2021]: time="2025-04-30T00:46:41.831808259Z" level=info msg="StartContainer for \"b5612d3c9809c3e7352bd4113ec6fa572f2854c77e6651bec2423676ada1cd02\"" Apr 30 00:46:41.903839 systemd[1]: Started cri-containerd-b5612d3c9809c3e7352bd4113ec6fa572f2854c77e6651bec2423676ada1cd02.scope - libcontainer container b5612d3c9809c3e7352bd4113ec6fa572f2854c77e6651bec2423676ada1cd02. Apr 30 00:46:41.957954 containerd[2021]: time="2025-04-30T00:46:41.957875351Z" level=info msg="StartContainer for \"b5612d3c9809c3e7352bd4113ec6fa572f2854c77e6651bec2423676ada1cd02\" returns successfully" Apr 30 00:46:41.967578 systemd[1]: cri-containerd-b5612d3c9809c3e7352bd4113ec6fa572f2854c77e6651bec2423676ada1cd02.scope: Deactivated successfully. 
Apr 30 00:46:42.019638 containerd[2021]: time="2025-04-30T00:46:42.019424624Z" level=info msg="shim disconnected" id=b5612d3c9809c3e7352bd4113ec6fa572f2854c77e6651bec2423676ada1cd02 namespace=k8s.io Apr 30 00:46:42.019638 containerd[2021]: time="2025-04-30T00:46:42.019504328Z" level=warning msg="cleaning up after shim disconnected" id=b5612d3c9809c3e7352bd4113ec6fa572f2854c77e6651bec2423676ada1cd02 namespace=k8s.io Apr 30 00:46:42.019638 containerd[2021]: time="2025-04-30T00:46:42.019545476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:46:42.206460 containerd[2021]: time="2025-04-30T00:46:42.206399732Z" level=info msg="StopPodSandbox for \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\"" Apr 30 00:46:42.206671 containerd[2021]: time="2025-04-30T00:46:42.206579300Z" level=info msg="TearDown network for sandbox \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\" successfully" Apr 30 00:46:42.206671 containerd[2021]: time="2025-04-30T00:46:42.206607644Z" level=info msg="StopPodSandbox for \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\" returns successfully" Apr 30 00:46:42.207372 containerd[2021]: time="2025-04-30T00:46:42.207237524Z" level=info msg="RemovePodSandbox for \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\"" Apr 30 00:46:42.207372 containerd[2021]: time="2025-04-30T00:46:42.207303512Z" level=info msg="Forcibly stopping sandbox \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\"" Apr 30 00:46:42.207720 containerd[2021]: time="2025-04-30T00:46:42.207403700Z" level=info msg="TearDown network for sandbox \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\" successfully" Apr 30 00:46:42.212898 containerd[2021]: time="2025-04-30T00:46:42.212805272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\": an error occurred when try to 
find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 00:46:42.213037 containerd[2021]: time="2025-04-30T00:46:42.212905664Z" level=info msg="RemovePodSandbox \"fc18d9e4f28801f9301538410aa81d4edd4de20e3358fb28e84eabd66516da6d\" returns successfully" Apr 30 00:46:42.214371 containerd[2021]: time="2025-04-30T00:46:42.213808604Z" level=info msg="StopPodSandbox for \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\"" Apr 30 00:46:42.214371 containerd[2021]: time="2025-04-30T00:46:42.213968672Z" level=info msg="TearDown network for sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" successfully" Apr 30 00:46:42.214371 containerd[2021]: time="2025-04-30T00:46:42.213996512Z" level=info msg="StopPodSandbox for \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" returns successfully" Apr 30 00:46:42.215435 containerd[2021]: time="2025-04-30T00:46:42.215347232Z" level=info msg="RemovePodSandbox for \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\"" Apr 30 00:46:42.215874 containerd[2021]: time="2025-04-30T00:46:42.215637992Z" level=info msg="Forcibly stopping sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\"" Apr 30 00:46:42.215874 containerd[2021]: time="2025-04-30T00:46:42.215798264Z" level=info msg="TearDown network for sandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" successfully" Apr 30 00:46:42.224374 containerd[2021]: time="2025-04-30T00:46:42.224132637Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 00:46:42.224374 containerd[2021]: time="2025-04-30T00:46:42.224219961Z" level=info msg="RemovePodSandbox \"820af9907e2c5b37e73ea97658469b2fed6cf860fd3ceb5e990da781b7692d11\" returns successfully" Apr 30 00:46:42.451017 kubelet[3266]: E0430 00:46:42.450964 3266 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 30 00:46:42.755582 systemd[1]: run-containerd-runc-k8s.io-b5612d3c9809c3e7352bd4113ec6fa572f2854c77e6651bec2423676ada1cd02-runc.REV1Ku.mount: Deactivated successfully. Apr 30 00:46:42.756373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5612d3c9809c3e7352bd4113ec6fa572f2854c77e6651bec2423676ada1cd02-rootfs.mount: Deactivated successfully. Apr 30 00:46:42.787212 containerd[2021]: time="2025-04-30T00:46:42.787096535Z" level=info msg="CreateContainer within sandbox \"cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:46:42.844689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853485541.mount: Deactivated successfully. Apr 30 00:46:42.846354 containerd[2021]: time="2025-04-30T00:46:42.845379372Z" level=info msg="CreateContainer within sandbox \"cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"69f5f9bc431a26c9d3b32386763da52c1732a9d82c8b5444a9d031bbe3cbd62b\"" Apr 30 00:46:42.850126 containerd[2021]: time="2025-04-30T00:46:42.849543444Z" level=info msg="StartContainer for \"69f5f9bc431a26c9d3b32386763da52c1732a9d82c8b5444a9d031bbe3cbd62b\"" Apr 30 00:46:42.918905 systemd[1]: Started cri-containerd-69f5f9bc431a26c9d3b32386763da52c1732a9d82c8b5444a9d031bbe3cbd62b.scope - libcontainer container 69f5f9bc431a26c9d3b32386763da52c1732a9d82c8b5444a9d031bbe3cbd62b. 
Apr 30 00:46:42.981923 systemd[1]: cri-containerd-69f5f9bc431a26c9d3b32386763da52c1732a9d82c8b5444a9d031bbe3cbd62b.scope: Deactivated successfully.
Apr 30 00:46:42.985183 containerd[2021]: time="2025-04-30T00:46:42.984920436Z" level=info msg="StartContainer for \"69f5f9bc431a26c9d3b32386763da52c1732a9d82c8b5444a9d031bbe3cbd62b\" returns successfully"
Apr 30 00:46:43.052780 containerd[2021]: time="2025-04-30T00:46:43.052456929Z" level=info msg="shim disconnected" id=69f5f9bc431a26c9d3b32386763da52c1732a9d82c8b5444a9d031bbe3cbd62b namespace=k8s.io
Apr 30 00:46:43.052780 containerd[2021]: time="2025-04-30T00:46:43.052643385Z" level=warning msg="cleaning up after shim disconnected" id=69f5f9bc431a26c9d3b32386763da52c1732a9d82c8b5444a9d031bbe3cbd62b namespace=k8s.io
Apr 30 00:46:43.052780 containerd[2021]: time="2025-04-30T00:46:43.052677309Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:46:43.755606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69f5f9bc431a26c9d3b32386763da52c1732a9d82c8b5444a9d031bbe3cbd62b-rootfs.mount: Deactivated successfully.
Apr 30 00:46:43.794898 containerd[2021]: time="2025-04-30T00:46:43.794263212Z" level=info msg="CreateContainer within sandbox \"cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 00:46:43.830456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount738811537.mount: Deactivated successfully.
Apr 30 00:46:43.841441 containerd[2021]: time="2025-04-30T00:46:43.841284421Z" level=info msg="CreateContainer within sandbox \"cbf69125ff36d2e590fb3d765586d8b70135a5c32b4936f709d161e25d4295c4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b95d776ee5cc2fb8cf86213e8731e22f557fdf4096956c54c95ba5e36006716d\""
Apr 30 00:46:43.848183 containerd[2021]: time="2025-04-30T00:46:43.846366061Z" level=info msg="StartContainer for \"b95d776ee5cc2fb8cf86213e8731e22f557fdf4096956c54c95ba5e36006716d\""
Apr 30 00:46:43.917005 systemd[1]: Started cri-containerd-b95d776ee5cc2fb8cf86213e8731e22f557fdf4096956c54c95ba5e36006716d.scope - libcontainer container b95d776ee5cc2fb8cf86213e8731e22f557fdf4096956c54c95ba5e36006716d.
Apr 30 00:46:43.980028 containerd[2021]: time="2025-04-30T00:46:43.979556497Z" level=info msg="StartContainer for \"b95d776ee5cc2fb8cf86213e8731e22f557fdf4096956c54c95ba5e36006716d\" returns successfully"
Apr 30 00:46:44.580655 kubelet[3266]: I0430 00:46:44.580259 3266 setters.go:580] "Node became not ready" node="ip-172-31-24-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T00:46:44Z","lastTransitionTime":"2025-04-30T00:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 30 00:46:44.759425 systemd[1]: run-containerd-runc-k8s.io-b95d776ee5cc2fb8cf86213e8731e22f557fdf4096956c54c95ba5e36006716d-runc.ZggTKA.mount: Deactivated successfully.
Apr 30 00:46:44.789795 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 30 00:46:46.215097 kubelet[3266]: E0430 00:46:46.214696 3266 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-82f2v" podUID="bc2966f1-b7f2-4e51-864d-842f211314f4"
Apr 30 00:46:49.208902 (udev-worker)[6065]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 00:46:49.210949 systemd-networkd[1846]: lxc_health: Link UP
Apr 30 00:46:49.221692 (udev-worker)[6066]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 00:46:49.233588 systemd-networkd[1846]: lxc_health: Gained carrier
Apr 30 00:46:49.868546 systemd[1]: run-containerd-runc-k8s.io-b95d776ee5cc2fb8cf86213e8731e22f557fdf4096956c54c95ba5e36006716d-runc.OoW5if.mount: Deactivated successfully.
Apr 30 00:46:49.941192 kubelet[3266]: I0430 00:46:49.940711 3266 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wf4nb" podStartSLOduration=12.940689739 podStartE2EDuration="12.940689739s" podCreationTimestamp="2025-04-30 00:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:46:44.840654158 +0000 UTC m=+122.869524012" watchObservedRunningTime="2025-04-30 00:46:49.940689739 +0000 UTC m=+127.969559617"
Apr 30 00:46:50.811817 systemd-networkd[1846]: lxc_health: Gained IPv6LL
Apr 30 00:46:53.176562 ntpd[1988]: Listen normally on 14 lxc_health [fe80::2837:4bff:fe4b:4c12%14]:123
Apr 30 00:46:53.177108 ntpd[1988]: 30 Apr 00:46:53 ntpd[1988]: Listen normally on 14 lxc_health [fe80::2837:4bff:fe4b:4c12%14]:123
Apr 30 00:46:54.580138 systemd[1]: run-containerd-runc-k8s.io-b95d776ee5cc2fb8cf86213e8731e22f557fdf4096956c54c95ba5e36006716d-runc.D2Yu1G.mount: Deactivated successfully.
Apr 30 00:46:54.733591 sshd[5226]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:54.741273 systemd[1]: sshd@26-172.31.24.0:22-147.75.109.163:34754.service: Deactivated successfully.
Apr 30 00:46:54.746355 systemd[1]: session-27.scope: Deactivated successfully.
Apr 30 00:46:54.752806 systemd-logind[1995]: Session 27 logged out. Waiting for processes to exit.
Apr 30 00:46:54.756095 systemd-logind[1995]: Removed session 27.
Apr 30 00:47:08.199649 systemd[1]: cri-containerd-470050b82cb0812d879b0b9902490f7e44c2ff5def476f3d8fe3b934fd68625f.scope: Deactivated successfully.
Apr 30 00:47:08.200266 systemd[1]: cri-containerd-470050b82cb0812d879b0b9902490f7e44c2ff5def476f3d8fe3b934fd68625f.scope: Consumed 6.236s CPU time, 24.2M memory peak, 0B memory swap peak.
Apr 30 00:47:08.254626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-470050b82cb0812d879b0b9902490f7e44c2ff5def476f3d8fe3b934fd68625f-rootfs.mount: Deactivated successfully.
Apr 30 00:47:08.275419 containerd[2021]: time="2025-04-30T00:47:08.275082418Z" level=info msg="shim disconnected" id=470050b82cb0812d879b0b9902490f7e44c2ff5def476f3d8fe3b934fd68625f namespace=k8s.io
Apr 30 00:47:08.275419 containerd[2021]: time="2025-04-30T00:47:08.275177842Z" level=warning msg="cleaning up after shim disconnected" id=470050b82cb0812d879b0b9902490f7e44c2ff5def476f3d8fe3b934fd68625f namespace=k8s.io
Apr 30 00:47:08.275419 containerd[2021]: time="2025-04-30T00:47:08.275197966Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:08.878047 kubelet[3266]: I0430 00:47:08.876800 3266 scope.go:117] "RemoveContainer" containerID="470050b82cb0812d879b0b9902490f7e44c2ff5def476f3d8fe3b934fd68625f"
Apr 30 00:47:08.880656 containerd[2021]: time="2025-04-30T00:47:08.880603717Z" level=info msg="CreateContainer within sandbox \"02c8542774ca63c0ae8b54c0ecd1c4e7dac81b8256f333c93c50fcbe1f602aaa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 00:47:08.911677 containerd[2021]: time="2025-04-30T00:47:08.911576737Z" level=info msg="CreateContainer within sandbox \"02c8542774ca63c0ae8b54c0ecd1c4e7dac81b8256f333c93c50fcbe1f602aaa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3b8ab78c00fe8f587d38c128eaef431296536391e0546c3e97ef6438be146259\""
Apr 30 00:47:08.912783 containerd[2021]: time="2025-04-30T00:47:08.912713317Z" level=info msg="StartContainer for \"3b8ab78c00fe8f587d38c128eaef431296536391e0546c3e97ef6438be146259\""
Apr 30 00:47:08.966856 systemd[1]: Started cri-containerd-3b8ab78c00fe8f587d38c128eaef431296536391e0546c3e97ef6438be146259.scope - libcontainer container 3b8ab78c00fe8f587d38c128eaef431296536391e0546c3e97ef6438be146259.
Apr 30 00:47:09.034377 containerd[2021]: time="2025-04-30T00:47:09.034090786Z" level=info msg="StartContainer for \"3b8ab78c00fe8f587d38c128eaef431296536391e0546c3e97ef6438be146259\" returns successfully"
Apr 30 00:47:13.541108 systemd[1]: cri-containerd-ac2937d2dac3bfb1b98d149eec5b8641c215d34aa24074d2565a15cb5d43e125.scope: Deactivated successfully.
Apr 30 00:47:13.543664 systemd[1]: cri-containerd-ac2937d2dac3bfb1b98d149eec5b8641c215d34aa24074d2565a15cb5d43e125.scope: Consumed 3.266s CPU time, 16.4M memory peak, 0B memory swap peak.
Apr 30 00:47:13.579933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac2937d2dac3bfb1b98d149eec5b8641c215d34aa24074d2565a15cb5d43e125-rootfs.mount: Deactivated successfully.
Apr 30 00:47:13.592799 containerd[2021]: time="2025-04-30T00:47:13.592560124Z" level=info msg="shim disconnected" id=ac2937d2dac3bfb1b98d149eec5b8641c215d34aa24074d2565a15cb5d43e125 namespace=k8s.io
Apr 30 00:47:13.592799 containerd[2021]: time="2025-04-30T00:47:13.592678900Z" level=warning msg="cleaning up after shim disconnected" id=ac2937d2dac3bfb1b98d149eec5b8641c215d34aa24074d2565a15cb5d43e125 namespace=k8s.io
Apr 30 00:47:13.592799 containerd[2021]: time="2025-04-30T00:47:13.592760416Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:13.616312 containerd[2021]: time="2025-04-30T00:47:13.616173616Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:47:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 00:47:13.899502 kubelet[3266]: I0430 00:47:13.899309 3266 scope.go:117] "RemoveContainer" containerID="ac2937d2dac3bfb1b98d149eec5b8641c215d34aa24074d2565a15cb5d43e125"
Apr 30 00:47:13.902994 containerd[2021]: time="2025-04-30T00:47:13.902938170Z" level=info msg="CreateContainer within sandbox \"e00b6500558e8ebf52969a72b5c8c8d6bed94c225c248996d9a357266bc11261\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 00:47:13.930180 containerd[2021]: time="2025-04-30T00:47:13.930050142Z" level=info msg="CreateContainer within sandbox \"e00b6500558e8ebf52969a72b5c8c8d6bed94c225c248996d9a357266bc11261\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"fb5e336721c4c42e6f8192f633e785d76bb94f4de6cf68d56eb1e2a790d560c6\""
Apr 30 00:47:13.931687 containerd[2021]: time="2025-04-30T00:47:13.930857694Z" level=info msg="StartContainer for \"fb5e336721c4c42e6f8192f633e785d76bb94f4de6cf68d56eb1e2a790d560c6\""
Apr 30 00:47:13.996027 systemd[1]: Started cri-containerd-fb5e336721c4c42e6f8192f633e785d76bb94f4de6cf68d56eb1e2a790d560c6.scope - libcontainer container fb5e336721c4c42e6f8192f633e785d76bb94f4de6cf68d56eb1e2a790d560c6.
Apr 30 00:47:14.061046 containerd[2021]: time="2025-04-30T00:47:14.060869391Z" level=info msg="StartContainer for \"fb5e336721c4c42e6f8192f633e785d76bb94f4de6cf68d56eb1e2a790d560c6\" returns successfully"
Apr 30 00:47:14.582573 systemd[1]: run-containerd-runc-k8s.io-fb5e336721c4c42e6f8192f633e785d76bb94f4de6cf68d56eb1e2a790d560c6-runc.UOO5o8.mount: Deactivated successfully.
Apr 30 00:47:15.208067 kubelet[3266]: E0430 00:47:15.207132 3266 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 30 00:47:25.207822 kubelet[3266]: E0430 00:47:25.207561 3266 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.0:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-0?timeout=10s\": context deadline exceeded"