Apr 30 00:43:27.231977 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 30 00:43:27.232024 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:43:27.232049 kernel: KASLR disabled due to lack of seed
Apr 30 00:43:27.232067 kernel: efi: EFI v2.7 by EDK II
Apr 30 00:43:27.232083 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Apr 30 00:43:27.232099 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:43:27.232117 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 30 00:43:27.232133 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 30 00:43:27.232149 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 30 00:43:27.232165 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Apr 30 00:43:27.232185 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 30 00:43:27.232202 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 30 00:43:27.232218 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 30 00:43:27.232235 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 30 00:43:27.232254 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 30 00:43:27.232277 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 30 00:43:27.232295 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 30 00:43:27.232312 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 30 00:43:27.232329 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 30 00:43:27.232346 kernel: printk: bootconsole [uart0] enabled
Apr 30 00:43:27.232363 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:43:27.232380 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 30 00:43:27.232397 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 30 00:43:27.232414 kernel: Zone ranges:
Apr 30 00:43:27.232431 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 30 00:43:27.232448 kernel: DMA32 empty
Apr 30 00:43:27.232469 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 30 00:43:27.232487 kernel: Movable zone start for each node
Apr 30 00:43:27.232504 kernel: Early memory node ranges
Apr 30 00:43:27.232520 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 30 00:43:27.232537 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 30 00:43:27.232554 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 30 00:43:27.232571 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 30 00:43:27.232588 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 30 00:43:27.232605 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 30 00:43:27.232621 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 30 00:43:27.232638 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 30 00:43:27.232654 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 30 00:43:27.232676 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 30 00:43:27.232694 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:43:27.232718 kernel: psci: PSCIv1.0 detected in firmware.
Apr 30 00:43:27.235153 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:43:27.235180 kernel: psci: Trusted OS migration not required
Apr 30 00:43:27.235209 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:43:27.235228 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:43:27.235246 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:43:27.235265 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 30 00:43:27.235284 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:43:27.235301 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:43:27.235319 kernel: CPU features: detected: Spectre-v2
Apr 30 00:43:27.235337 kernel: CPU features: detected: Spectre-v3a
Apr 30 00:43:27.235354 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:43:27.235372 kernel: CPU features: detected: ARM erratum 1742098
Apr 30 00:43:27.235390 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 30 00:43:27.235414 kernel: alternatives: applying boot alternatives
Apr 30 00:43:27.235435 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:43:27.235454 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:43:27.235473 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:43:27.235491 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:43:27.235509 kernel: Fallback order for Node 0: 0
Apr 30 00:43:27.235527 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 30 00:43:27.235544 kernel: Policy zone: Normal
Apr 30 00:43:27.235562 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:43:27.235580 kernel: software IO TLB: area num 2.
Apr 30 00:43:27.235597 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 30 00:43:27.235622 kernel: Memory: 3820152K/4030464K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210312K reserved, 0K cma-reserved)
Apr 30 00:43:27.235641 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 30 00:43:27.235659 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:43:27.235677 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:43:27.235696 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 30 00:43:27.235715 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:43:27.235782 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:43:27.235804 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:43:27.235823 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 30 00:43:27.235840 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:43:27.235858 kernel: GICv3: 96 SPIs implemented
Apr 30 00:43:27.235882 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:43:27.235900 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:43:27.235918 kernel: GICv3: GICv3 features: 16 PPIs
Apr 30 00:43:27.235936 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 30 00:43:27.235954 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 30 00:43:27.235972 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:43:27.235991 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:43:27.236010 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 30 00:43:27.236028 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 30 00:43:27.236046 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 30 00:43:27.236064 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:43:27.236087 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 30 00:43:27.236112 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 30 00:43:27.236130 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 30 00:43:27.236148 kernel: Console: colour dummy device 80x25
Apr 30 00:43:27.236166 kernel: printk: console [tty1] enabled
Apr 30 00:43:27.236184 kernel: ACPI: Core revision 20230628
Apr 30 00:43:27.236203 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 30 00:43:27.236221 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:43:27.236239 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:43:27.236258 kernel: landlock: Up and running.
Apr 30 00:43:27.236282 kernel: SELinux: Initializing.
Apr 30 00:43:27.236300 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:43:27.236318 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:43:27.236337 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:43:27.236356 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 30 00:43:27.236374 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:43:27.236398 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:43:27.236418 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 30 00:43:27.236436 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 30 00:43:27.236459 kernel: Remapping and enabling EFI services.
Apr 30 00:43:27.236478 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:43:27.236496 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:43:27.236514 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 30 00:43:27.236533 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 30 00:43:27.236551 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 30 00:43:27.236569 kernel: smp: Brought up 1 node, 2 CPUs
Apr 30 00:43:27.236586 kernel: SMP: Total of 2 processors activated.
Apr 30 00:43:27.236605 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:43:27.236632 kernel: CPU features: detected: 32-bit EL1 Support
Apr 30 00:43:27.236650 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:43:27.236669 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:43:27.236698 kernel: alternatives: applying system-wide alternatives
Apr 30 00:43:27.236721 kernel: devtmpfs: initialized
Apr 30 00:43:27.237818 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:43:27.237842 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 30 00:43:27.237861 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:43:27.237880 kernel: SMBIOS 3.0.0 present.
Apr 30 00:43:27.237899 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 30 00:43:27.237928 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:43:27.237948 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:43:27.237967 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:43:27.237986 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:43:27.238005 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:43:27.238024 kernel: audit: type=2000 audit(0.297:1): state=initialized audit_enabled=0 res=1
Apr 30 00:43:27.238042 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:43:27.238065 kernel: cpuidle: using governor menu
Apr 30 00:43:27.238084 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:43:27.238103 kernel: ASID allocator initialised with 65536 entries
Apr 30 00:43:27.238121 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:43:27.238140 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:43:27.238158 kernel: Modules: 17504 pages in range for non-PLT usage
Apr 30 00:43:27.238177 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:43:27.238196 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:43:27.238214 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:43:27.238237 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:43:27.238257 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:43:27.238277 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:43:27.238296 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:43:27.238317 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:43:27.238336 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:43:27.238355 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:43:27.238374 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:43:27.238392 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:43:27.238415 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:43:27.238434 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:43:27.238452 kernel: ACPI: Interpreter enabled
Apr 30 00:43:27.238471 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:43:27.238489 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:43:27.238508 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Apr 30 00:43:27.238844 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:43:27.239092 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:43:27.239316 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:43:27.239534 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Apr 30 00:43:27.242956 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Apr 30 00:43:27.243019 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 30 00:43:27.243039 kernel: acpiphp: Slot [1] registered
Apr 30 00:43:27.243059 kernel: acpiphp: Slot [2] registered
Apr 30 00:43:27.243078 kernel: acpiphp: Slot [3] registered
Apr 30 00:43:27.243097 kernel: acpiphp: Slot [4] registered
Apr 30 00:43:27.243125 kernel: acpiphp: Slot [5] registered
Apr 30 00:43:27.243144 kernel: acpiphp: Slot [6] registered
Apr 30 00:43:27.243162 kernel: acpiphp: Slot [7] registered
Apr 30 00:43:27.243180 kernel: acpiphp: Slot [8] registered
Apr 30 00:43:27.243199 kernel: acpiphp: Slot [9] registered
Apr 30 00:43:27.243217 kernel: acpiphp: Slot [10] registered
Apr 30 00:43:27.243236 kernel: acpiphp: Slot [11] registered
Apr 30 00:43:27.243255 kernel: acpiphp: Slot [12] registered
Apr 30 00:43:27.243274 kernel: acpiphp: Slot [13] registered
Apr 30 00:43:27.243292 kernel: acpiphp: Slot [14] registered
Apr 30 00:43:27.243315 kernel: acpiphp: Slot [15] registered
Apr 30 00:43:27.243334 kernel: acpiphp: Slot [16] registered
Apr 30 00:43:27.243352 kernel: acpiphp: Slot [17] registered
Apr 30 00:43:27.243371 kernel: acpiphp: Slot [18] registered
Apr 30 00:43:27.243389 kernel: acpiphp: Slot [19] registered
Apr 30 00:43:27.243407 kernel: acpiphp: Slot [20] registered
Apr 30 00:43:27.243426 kernel: acpiphp: Slot [21] registered
Apr 30 00:43:27.243444 kernel: acpiphp: Slot [22] registered
Apr 30 00:43:27.243463 kernel: acpiphp: Slot [23] registered
Apr 30 00:43:27.243485 kernel: acpiphp: Slot [24] registered
Apr 30 00:43:27.243505 kernel: acpiphp: Slot [25] registered
Apr 30 00:43:27.243523 kernel: acpiphp: Slot [26] registered
Apr 30 00:43:27.243541 kernel: acpiphp: Slot [27] registered
Apr 30 00:43:27.243560 kernel: acpiphp: Slot [28] registered
Apr 30 00:43:27.243578 kernel: acpiphp: Slot [29] registered
Apr 30 00:43:27.243596 kernel: acpiphp: Slot [30] registered
Apr 30 00:43:27.243614 kernel: acpiphp: Slot [31] registered
Apr 30 00:43:27.243633 kernel: PCI host bridge to bus 0000:00
Apr 30 00:43:27.244003 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 30 00:43:27.244210 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:43:27.244402 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 30 00:43:27.244589 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Apr 30 00:43:27.245323 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 30 00:43:27.245579 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 30 00:43:27.245840 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 30 00:43:27.246103 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 30 00:43:27.246327 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 30 00:43:27.246547 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 30 00:43:27.246914 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 30 00:43:27.247164 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 30 00:43:27.247394 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 30 00:43:27.247620 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 30 00:43:27.247872 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 30 00:43:27.248092 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Apr 30 00:43:27.248308 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Apr 30 00:43:27.248527 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Apr 30 00:43:27.248763 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Apr 30 00:43:27.248999 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Apr 30 00:43:27.249211 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 30 00:43:27.249407 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:43:27.249600 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 30 00:43:27.249626 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:43:27.249646 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:43:27.249665 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:43:27.249683 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:43:27.249702 kernel: iommu: Default domain type: Translated
Apr 30 00:43:27.249721 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:43:27.249785 kernel: efivars: Registered efivars operations
Apr 30 00:43:27.249804 kernel: vgaarb: loaded
Apr 30 00:43:27.249823 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:43:27.249842 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:43:27.249860 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:43:27.249879 kernel: pnp: PnP ACPI init
Apr 30 00:43:27.250125 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 30 00:43:27.250154 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:43:27.250179 kernel: NET: Registered PF_INET protocol family
Apr 30 00:43:27.250199 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:43:27.250218 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:43:27.250237 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:43:27.250257 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:43:27.250276 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:43:27.250296 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:43:27.250314 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:43:27.250333 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:43:27.250356 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:43:27.250375 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:43:27.250393 kernel: kvm [1]: HYP mode not available
Apr 30 00:43:27.250411 kernel: Initialise system trusted keyrings
Apr 30 00:43:27.250430 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:43:27.250449 kernel: Key type asymmetric registered
Apr 30 00:43:27.250467 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:43:27.250486 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:43:27.250505 kernel: io scheduler mq-deadline registered
Apr 30 00:43:27.250528 kernel: io scheduler kyber registered
Apr 30 00:43:27.250547 kernel: io scheduler bfq registered
Apr 30 00:43:27.250832 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 30 00:43:27.250866 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 00:43:27.250886 kernel: ACPI: button: Power Button [PWRB]
Apr 30 00:43:27.250905 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 30 00:43:27.250924 kernel: ACPI: button: Sleep Button [SLPB]
Apr 30 00:43:27.250943 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:43:27.250988 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 30 00:43:27.251237 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 30 00:43:27.251267 kernel: printk: console [ttyS0] disabled
Apr 30 00:43:27.251287 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 30 00:43:27.251306 kernel: printk: console [ttyS0] enabled
Apr 30 00:43:27.251325 kernel: printk: bootconsole [uart0] disabled
Apr 30 00:43:27.251344 kernel: thunder_xcv, ver 1.0
Apr 30 00:43:27.251362 kernel: thunder_bgx, ver 1.0
Apr 30 00:43:27.251381 kernel: nicpf, ver 1.0
Apr 30 00:43:27.251407 kernel: nicvf, ver 1.0
Apr 30 00:43:27.251654 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:43:27.251942 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:43:26 UTC (1745973806)
Apr 30 00:43:27.251976 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:43:27.251995 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 30 00:43:27.252014 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:43:27.252033 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:43:27.252052 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:43:27.252078 kernel: Segment Routing with IPv6
Apr 30 00:43:27.252097 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:43:27.252116 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:43:27.252134 kernel: Key type dns_resolver registered
Apr 30 00:43:27.252153 kernel: registered taskstats version 1
Apr 30 00:43:27.252171 kernel: Loading compiled-in X.509 certificates
Apr 30 00:43:27.252190 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 00:43:27.252208 kernel: Key type .fscrypt registered
Apr 30 00:43:27.252226 kernel: Key type fscrypt-provisioning registered
Apr 30 00:43:27.252248 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:43:27.252268 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:43:27.252286 kernel: ima: No architecture policies found
Apr 30 00:43:27.252304 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:43:27.252323 kernel: clk: Disabling unused clocks
Apr 30 00:43:27.252341 kernel: Freeing unused kernel memory: 39424K
Apr 30 00:43:27.252360 kernel: Run /init as init process
Apr 30 00:43:27.252378 kernel: with arguments:
Apr 30 00:43:27.252396 kernel: /init
Apr 30 00:43:27.252415 kernel: with environment:
Apr 30 00:43:27.252437 kernel: HOME=/
Apr 30 00:43:27.252456 kernel: TERM=linux
Apr 30 00:43:27.252474 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:43:27.252496 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:43:27.252520 systemd[1]: Detected virtualization amazon.
Apr 30 00:43:27.252540 systemd[1]: Detected architecture arm64.
Apr 30 00:43:27.252560 systemd[1]: Running in initrd.
Apr 30 00:43:27.252584 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:43:27.252604 systemd[1]: Hostname set to .
Apr 30 00:43:27.252624 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:43:27.252645 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:43:27.252665 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:43:27.252685 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:43:27.252707 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:43:27.252749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:43:27.252782 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:43:27.252805 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:43:27.252829 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:43:27.252850 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:43:27.252870 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:43:27.252890 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:43:27.252910 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:43:27.252936 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:43:27.252956 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:43:27.252976 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:43:27.252996 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:43:27.253016 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:43:27.253037 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:43:27.253057 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:43:27.253077 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:43:27.253097 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:43:27.253123 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:43:27.253143 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:43:27.253163 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:43:27.253183 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:43:27.253203 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:43:27.253223 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:43:27.253243 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:43:27.253264 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:43:27.253289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:43:27.253310 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:43:27.253330 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:43:27.253396 systemd-journald[250]: Collecting audit messages is disabled.
Apr 30 00:43:27.253446 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:43:27.253469 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:43:27.253490 systemd-journald[250]: Journal started
Apr 30 00:43:27.253532 systemd-journald[250]: Runtime Journal (/run/log/journal/ec21440092a0667bab19bd443c2889d8) is 8.0M, max 75.3M, 67.3M free.
Apr 30 00:43:27.226637 systemd-modules-load[251]: Inserted module 'overlay'
Apr 30 00:43:27.271037 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:43:27.271117 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:43:27.271476 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:43:27.273963 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:43:27.281938 kernel: Bridge firewalling registered
Apr 30 00:43:27.282040 systemd-modules-load[251]: Inserted module 'br_netfilter'
Apr 30 00:43:27.285755 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:43:27.297080 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:43:27.306603 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:43:27.313675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:43:27.321260 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:43:27.359803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:43:27.364833 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:43:27.378149 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:43:27.390812 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:43:27.396904 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:43:27.411268 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:43:27.418617 dracut-cmdline[285]: dracut-dracut-053
Apr 30 00:43:27.425080 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:43:27.493716 systemd-resolved[292]: Positive Trust Anchors:
Apr 30 00:43:27.493796 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:43:27.493860 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:43:27.596762 kernel: SCSI subsystem initialized
Apr 30 00:43:27.603764 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:43:27.617798 kernel: iscsi: registered transport (tcp)
Apr 30 00:43:27.640063 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:43:27.640150 kernel: QLogic iSCSI HBA Driver
Apr 30 00:43:27.736774 kernel: random: crng init done
Apr 30 00:43:27.737071 systemd-resolved[292]: Defaulting to hostname 'linux'.
Apr 30 00:43:27.739709 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:43:27.744149 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:43:27.784143 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 00:43:27.795033 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 00:43:27.847102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 30 00:43:27.847179 kernel: device-mapper: uevent: version 1.0.3 Apr 30 00:43:27.847207 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 00:43:27.915792 kernel: raid6: neonx8 gen() 6706 MB/s Apr 30 00:43:27.932785 kernel: raid6: neonx4 gen() 6522 MB/s Apr 30 00:43:27.949776 kernel: raid6: neonx2 gen() 5433 MB/s Apr 30 00:43:27.966783 kernel: raid6: neonx1 gen() 3949 MB/s Apr 30 00:43:27.983775 kernel: raid6: int64x8 gen() 3788 MB/s Apr 30 00:43:28.000780 kernel: raid6: int64x4 gen() 3716 MB/s Apr 30 00:43:28.017773 kernel: raid6: int64x2 gen() 3581 MB/s Apr 30 00:43:28.035745 kernel: raid6: int64x1 gen() 2770 MB/s Apr 30 00:43:28.035809 kernel: raid6: using algorithm neonx8 gen() 6706 MB/s Apr 30 00:43:28.054780 kernel: raid6: .... xor() 4880 MB/s, rmw enabled Apr 30 00:43:28.054838 kernel: raid6: using neon recovery algorithm Apr 30 00:43:28.062778 kernel: xor: measuring software checksum speed Apr 30 00:43:28.062854 kernel: 8regs : 10144 MB/sec Apr 30 00:43:28.065086 kernel: 32regs : 11987 MB/sec Apr 30 00:43:28.066387 kernel: arm64_neon : 9589 MB/sec Apr 30 00:43:28.066420 kernel: xor: using function: 32regs (11987 MB/sec) Apr 30 00:43:28.153780 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 00:43:28.177597 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:43:28.189065 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:43:28.238159 systemd-udevd[471]: Using default interface naming scheme 'v255'. 
Apr 30 00:43:28.247917 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:43:28.272393 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 00:43:28.317090 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Apr 30 00:43:28.382215 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 00:43:28.392057 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:43:28.522658 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:43:28.548433 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 00:43:28.608463 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 00:43:28.614657 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:43:28.620342 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:43:28.623368 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:43:28.642138 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 00:43:28.681138 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:43:28.733537 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 30 00:43:28.733613 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Apr 30 00:43:28.765690 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 30 00:43:28.766476 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 30 00:43:28.766836 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c0:7d:29:2e:31 Apr 30 00:43:28.762309 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:43:28.762602 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 30 00:43:28.770177 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:43:28.771901 (udev-worker)[526]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:43:28.780268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:43:28.780525 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:43:28.804804 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:43:28.816621 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:43:28.828428 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 30 00:43:28.828497 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 30 00:43:28.838795 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 30 00:43:28.849770 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 00:43:28.849861 kernel: GPT:9289727 != 16777215 Apr 30 00:43:28.849889 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 00:43:28.849915 kernel: GPT:9289727 != 16777215 Apr 30 00:43:28.849940 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 00:43:28.849967 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 00:43:28.858715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:43:28.871145 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 00:43:28.913683 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:43:28.949611 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (515) Apr 30 00:43:29.010783 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (523) Apr 30 00:43:29.092273 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. 
Apr 30 00:43:29.110523 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 30 00:43:29.128473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 30 00:43:29.147216 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 30 00:43:29.163483 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 30 00:43:29.187183 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 00:43:29.200840 disk-uuid[660]: Primary Header is updated. Apr 30 00:43:29.200840 disk-uuid[660]: Secondary Entries is updated. Apr 30 00:43:29.200840 disk-uuid[660]: Secondary Header is updated. Apr 30 00:43:29.211812 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 00:43:29.222774 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 00:43:29.234788 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 00:43:30.233808 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 30 00:43:30.237190 disk-uuid[661]: The operation has completed successfully. Apr 30 00:43:30.440410 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 00:43:30.442517 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 00:43:30.503141 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 00:43:30.522074 sh[1005]: Success Apr 30 00:43:30.547781 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 30 00:43:30.668230 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 00:43:30.682531 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 30 00:43:30.691849 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 30 00:43:30.736871 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4 Apr 30 00:43:30.736948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:43:30.736989 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 00:43:30.738353 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 00:43:30.739580 kernel: BTRFS info (device dm-0): using free space tree Apr 30 00:43:30.843775 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 30 00:43:30.867149 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 00:43:30.871543 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 00:43:30.885883 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 00:43:30.893072 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 00:43:30.921224 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:43:30.921304 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:43:30.922634 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 00:43:30.932770 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 00:43:30.953721 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 30 00:43:30.957109 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:43:30.969352 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 00:43:30.982223 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 00:43:31.125876 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 30 00:43:31.144151 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:43:31.200598 systemd-networkd[1199]: lo: Link UP Apr 30 00:43:31.200629 systemd-networkd[1199]: lo: Gained carrier Apr 30 00:43:31.206407 systemd-networkd[1199]: Enumeration completed Apr 30 00:43:31.206599 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:43:31.210517 systemd[1]: Reached target network.target - Network. Apr 30 00:43:31.214877 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:43:31.214899 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 00:43:31.225819 systemd-networkd[1199]: eth0: Link UP Apr 30 00:43:31.225842 systemd-networkd[1199]: eth0: Gained carrier Apr 30 00:43:31.225863 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:43:31.245891 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.27.157/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 00:43:31.440335 ignition[1102]: Ignition 2.19.0 Apr 30 00:43:31.440365 ignition[1102]: Stage: fetch-offline Apr 30 00:43:31.442138 ignition[1102]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:43:31.442195 ignition[1102]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 00:43:31.444094 ignition[1102]: Ignition finished successfully Apr 30 00:43:31.451347 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:43:31.471354 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 30 00:43:31.499392 ignition[1208]: Ignition 2.19.0 Apr 30 00:43:31.499414 ignition[1208]: Stage: fetch Apr 30 00:43:31.500108 ignition[1208]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:43:31.500134 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 00:43:31.500617 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 00:43:31.524538 ignition[1208]: PUT result: OK Apr 30 00:43:31.542844 ignition[1208]: parsed url from cmdline: "" Apr 30 00:43:31.542861 ignition[1208]: no config URL provided Apr 30 00:43:31.542878 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 00:43:31.542906 ignition[1208]: no config at "/usr/lib/ignition/user.ign" Apr 30 00:43:31.542959 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 00:43:31.547422 ignition[1208]: PUT result: OK Apr 30 00:43:31.547606 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 30 00:43:31.552180 ignition[1208]: GET result: OK Apr 30 00:43:31.552434 ignition[1208]: parsing config with SHA512: 6a3ef1750ac227e946396b04bf71c901839573df4c3ff1dd9f874e166c0bfc93a4faf39b0d9c09ee82be9e484f9664c859bec4999ea0f9d85369dc12b8698e77 Apr 30 00:43:31.565839 unknown[1208]: fetched base config from "system" Apr 30 00:43:31.565890 unknown[1208]: fetched base config from "system" Apr 30 00:43:31.565906 unknown[1208]: fetched user config from "aws" Apr 30 00:43:31.573938 ignition[1208]: fetch: fetch complete Apr 30 00:43:31.573953 ignition[1208]: fetch: fetch passed Apr 30 00:43:31.574143 ignition[1208]: Ignition finished successfully Apr 30 00:43:31.579641 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 00:43:31.594059 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 30 00:43:31.636173 ignition[1215]: Ignition 2.19.0 Apr 30 00:43:31.636211 ignition[1215]: Stage: kargs Apr 30 00:43:31.637072 ignition[1215]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:43:31.637106 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 00:43:31.637282 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 00:43:31.640807 ignition[1215]: PUT result: OK Apr 30 00:43:31.651952 ignition[1215]: kargs: kargs passed Apr 30 00:43:31.652303 ignition[1215]: Ignition finished successfully Apr 30 00:43:31.657872 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 00:43:31.677662 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 00:43:31.703463 ignition[1221]: Ignition 2.19.0 Apr 30 00:43:31.703489 ignition[1221]: Stage: disks Apr 30 00:43:31.704362 ignition[1221]: no configs at "/usr/lib/ignition/base.d" Apr 30 00:43:31.704392 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 00:43:31.704927 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 00:43:31.707698 ignition[1221]: PUT result: OK Apr 30 00:43:31.720529 ignition[1221]: disks: disks passed Apr 30 00:43:31.720822 ignition[1221]: Ignition finished successfully Apr 30 00:43:31.728905 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 30 00:43:31.734060 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 00:43:31.736830 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 00:43:31.739890 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:43:31.741995 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 00:43:31.744106 systemd[1]: Reached target basic.target - Basic System. Apr 30 00:43:31.757310 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 30 00:43:31.814051 systemd-fsck[1229]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 30 00:43:31.821889 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 00:43:31.833968 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 00:43:31.940209 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none. Apr 30 00:43:31.941444 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 00:43:31.945428 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 00:43:31.959003 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 00:43:31.966363 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 00:43:31.971268 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 00:43:31.971397 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 00:43:31.971462 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:43:31.999023 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 00:43:32.008809 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1248) Apr 30 00:43:32.011180 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 00:43:32.022307 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:43:32.022354 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:43:32.022381 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 00:43:32.029118 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 00:43:32.033662 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 00:43:32.365513 initrd-setup-root[1272]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 00:43:32.386465 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory Apr 30 00:43:32.397800 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 00:43:32.409583 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 00:43:32.755357 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 00:43:32.770979 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 30 00:43:32.776149 systemd-networkd[1199]: eth0: Gained IPv6LL Apr 30 00:43:32.776905 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 00:43:32.798432 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 00:43:32.804862 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:43:32.848887 ignition[1361]: INFO : Ignition 2.19.0 Apr 30 00:43:32.848887 ignition[1361]: INFO : Stage: mount Apr 30 00:43:32.852896 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:43:32.852896 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 00:43:32.850656 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 00:43:32.862917 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 00:43:32.866265 ignition[1361]: INFO : PUT result: OK Apr 30 00:43:32.871624 ignition[1361]: INFO : mount: mount passed Apr 30 00:43:32.873406 ignition[1361]: INFO : Ignition finished successfully Apr 30 00:43:32.877590 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 00:43:32.888029 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 00:43:32.959040 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 30 00:43:32.982794 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1372) Apr 30 00:43:32.986674 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1 Apr 30 00:43:32.986771 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 30 00:43:32.986802 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 30 00:43:32.992771 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 30 00:43:32.996516 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 00:43:33.042261 ignition[1389]: INFO : Ignition 2.19.0 Apr 30 00:43:33.042261 ignition[1389]: INFO : Stage: files Apr 30 00:43:33.046610 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:43:33.046610 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 00:43:33.046610 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 00:43:33.046610 ignition[1389]: INFO : PUT result: OK Apr 30 00:43:33.057903 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping Apr 30 00:43:33.061048 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 00:43:33.061048 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 00:43:33.103724 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 00:43:33.106634 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 00:43:33.109831 unknown[1389]: wrote ssh authorized keys file for user: core Apr 30 00:43:33.112109 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 00:43:33.123912 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:43:33.127446 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 30 00:43:33.131031 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:43:33.136548 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 30 00:43:35.085187 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 30 00:43:38.792416 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 30 00:43:38.796936 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 00:43:38.796936 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Apr 30 00:43:39.296915 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 30 00:43:39.431169 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:43:39.434940 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:43:39.479936 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Apr 30 00:43:39.813168 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 30 00:43:40.350443 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 30 00:43:40.354465 ignition[1389]: INFO : files: op(d): [started] processing unit "containerd.service" Apr 30 00:43:40.371853 ignition[1389]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 00:43:40.376202 ignition[1389]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 30 00:43:40.376202 ignition[1389]: INFO : files: op(d): [finished] processing unit "containerd.service" Apr 30 00:43:40.376202 ignition[1389]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Apr 30 00:43:40.376202 ignition[1389]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:43:40.376202 ignition[1389]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 30 00:43:40.376202 ignition[1389]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Apr 30 00:43:40.376202 ignition[1389]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 30 00:43:40.376202 ignition[1389]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 30 00:43:40.376202 ignition[1389]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:43:40.376202 ignition[1389]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 00:43:40.376202 ignition[1389]: INFO : files: files passed Apr 30 00:43:40.376202 ignition[1389]: INFO : Ignition finished successfully Apr 30 00:43:40.387253 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:43:40.437651 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 30 00:43:40.445911 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 00:43:40.458283 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 00:43:40.458487 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 00:43:40.481688 initrd-setup-root-after-ignition[1417]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:43:40.481688 initrd-setup-root-after-ignition[1417]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:43:40.489081 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 00:43:40.494438 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:43:40.497203 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 00:43:40.513157 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 00:43:40.569117 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 00:43:40.569305 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 00:43:40.572665 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 00:43:40.576258 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 00:43:40.578833 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 00:43:40.593615 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 00:43:40.622806 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:43:40.638000 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 00:43:40.663127 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 00:43:40.666050 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 00:43:40.673487 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 00:43:40.675797 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 00:43:40.676129 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 00:43:40.684849 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 00:43:40.687677 systemd[1]: Stopped target basic.target - Basic System. Apr 30 00:43:40.692418 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 00:43:40.695316 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 00:43:40.699275 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 00:43:40.705134 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 00:43:40.707782 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 00:43:40.714065 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 00:43:40.716697 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 00:43:40.722097 systemd[1]: Stopped target swap.target - Swaps. Apr 30 00:43:40.723868 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 00:43:40.724102 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 00:43:40.731819 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:43:40.734013 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:43:40.737372 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Apr 30 00:43:40.740325 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:43:40.742181 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 00:43:40.742404 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 00:43:40.743100 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 00:43:40.743318 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 00:43:40.744144 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 00:43:40.744444 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 00:43:40.773409 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 00:43:40.777203 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 00:43:40.777485 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:43:40.785136 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 00:43:40.806542 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 00:43:40.807886 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:43:40.819568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 00:43:40.822212 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 30 00:43:40.824899 ignition[1441]: INFO : Ignition 2.19.0 Apr 30 00:43:40.824899 ignition[1441]: INFO : Stage: umount Apr 30 00:43:40.824899 ignition[1441]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 00:43:40.824899 ignition[1441]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 30 00:43:40.847406 ignition[1441]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 30 00:43:40.847406 ignition[1441]: INFO : PUT result: OK Apr 30 00:43:40.847406 ignition[1441]: INFO : umount: umount passed Apr 30 00:43:40.847406 ignition[1441]: INFO : Ignition finished successfully Apr 30 00:43:40.851227 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 00:43:40.861138 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 00:43:40.868673 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 00:43:40.874374 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 00:43:40.874792 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 00:43:40.879566 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 00:43:40.879827 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 00:43:40.888523 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 00:43:40.888625 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 00:43:40.890983 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 00:43:40.891072 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 00:43:40.893982 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 00:43:40.894175 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 00:43:40.894521 systemd[1]: Stopped target network.target - Network. Apr 30 00:43:40.897312 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Apr 30 00:43:40.897394 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 00:43:40.897653 systemd[1]: Stopped target paths.target - Path Units. Apr 30 00:43:40.898216 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 00:43:40.915774 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:43:40.920142 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 00:43:40.929083 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 00:43:40.931020 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 00:43:40.931102 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 00:43:40.933022 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 00:43:40.933093 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 00:43:40.935072 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 00:43:40.935162 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 00:43:40.937090 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 00:43:40.937172 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 00:43:40.939166 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 00:43:40.939243 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 00:43:40.941528 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 00:43:40.943834 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 00:43:40.957340 systemd-networkd[1199]: eth0: DHCPv6 lease lost Apr 30 00:43:40.960577 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 00:43:40.964683 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 00:43:40.984298 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Apr 30 00:43:40.985581 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 00:43:40.991085 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 00:43:40.991813 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:43:41.008681 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 00:43:41.017480 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 00:43:41.017603 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 00:43:41.021561 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:43:41.021653 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:43:41.032644 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 00:43:41.032769 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 00:43:41.035206 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 00:43:41.035286 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:43:41.048297 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:43:41.059473 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 00:43:41.059949 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:43:41.071625 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 00:43:41.073602 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 00:43:41.078213 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 00:43:41.078297 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:43:41.080363 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Apr 30 00:43:41.080452 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 00:43:41.083064 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 00:43:41.083144 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 00:43:41.098150 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 00:43:41.098268 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 00:43:41.112126 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 00:43:41.117900 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 00:43:41.118022 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:43:41.120660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 00:43:41.120771 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:43:41.124086 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 00:43:41.124705 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 00:43:41.148899 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 00:43:41.149604 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 00:43:41.156643 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 00:43:41.170515 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 00:43:41.197534 systemd[1]: Switching root. Apr 30 00:43:41.235366 systemd-journald[250]: Journal stopped Apr 30 00:43:43.818819 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). 
Apr 30 00:43:43.818984 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 00:43:43.819029 kernel: SELinux: policy capability open_perms=1 Apr 30 00:43:43.819061 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 00:43:43.819101 kernel: SELinux: policy capability always_check_network=0 Apr 30 00:43:43.819132 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 00:43:43.819164 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 00:43:43.819194 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 00:43:43.819224 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 00:43:43.819261 kernel: audit: type=1403 audit(1745973822.022:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 00:43:43.819304 systemd[1]: Successfully loaded SELinux policy in 78.675ms. Apr 30 00:43:43.819352 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.534ms. Apr 30 00:43:43.819395 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 30 00:43:43.819427 systemd[1]: Detected virtualization amazon. Apr 30 00:43:43.819461 systemd[1]: Detected architecture arm64. Apr 30 00:43:43.819493 systemd[1]: Detected first boot. Apr 30 00:43:43.819525 systemd[1]: Initializing machine ID from VM UUID. Apr 30 00:43:43.819558 zram_generator::config[1501]: No configuration found. Apr 30 00:43:43.819597 systemd[1]: Populated /etc with preset unit settings. Apr 30 00:43:43.819630 systemd[1]: Queued start job for default target multi-user.target. Apr 30 00:43:43.819661 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
Apr 30 00:43:43.819694 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 00:43:43.819750 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 00:43:43.819789 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 00:43:43.819824 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 00:43:43.819857 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 00:43:43.819897 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 00:43:43.819930 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 00:43:43.819963 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 00:43:43.820006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 00:43:43.820037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 00:43:43.820071 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 00:43:43.820103 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 00:43:43.820138 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 00:43:43.820174 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 00:43:43.820203 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 00:43:43.820235 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 00:43:43.820266 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 00:43:43.820300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 30 00:43:43.820332 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 00:43:43.820364 systemd[1]: Reached target slices.target - Slice Units. Apr 30 00:43:43.820396 systemd[1]: Reached target swap.target - Swaps. Apr 30 00:43:43.820426 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 00:43:43.820461 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 00:43:43.820495 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 00:43:43.820524 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 30 00:43:43.820555 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 00:43:43.820584 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 00:43:43.820616 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 00:43:43.820647 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 00:43:43.820677 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 00:43:43.820706 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 00:43:43.821894 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 00:43:43.821942 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 00:43:43.821976 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 00:43:43.822006 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 00:43:43.822040 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 00:43:43.822083 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:43:43.822114 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Apr 30 00:43:43.822147 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 00:43:43.822177 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:43:43.822214 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 00:43:43.822247 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:43:43.822279 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 00:43:43.822311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:43:43.822342 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 00:43:43.822374 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 30 00:43:43.822408 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 30 00:43:43.822440 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 00:43:43.822474 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 00:43:43.822504 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 00:43:43.822534 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 00:43:43.822564 kernel: loop: module loaded Apr 30 00:43:43.822593 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 00:43:43.822625 kernel: ACPI: bus type drm_connector registered Apr 30 00:43:43.822713 systemd-journald[1598]: Collecting audit messages is disabled. Apr 30 00:43:43.822804 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Apr 30 00:43:43.822842 kernel: fuse: init (API version 7.39) Apr 30 00:43:43.822870 systemd-journald[1598]: Journal started Apr 30 00:43:43.822936 systemd-journald[1598]: Runtime Journal (/run/log/journal/ec21440092a0667bab19bd443c2889d8) is 8.0M, max 75.3M, 67.3M free. Apr 30 00:43:43.836071 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 00:43:43.845902 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 00:43:43.848447 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 00:43:43.851393 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 00:43:43.855101 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 00:43:43.857547 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 00:43:43.860157 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 00:43:43.864707 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 00:43:43.865526 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 00:43:43.868603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:43:43.869013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:43:43.872908 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 00:43:43.873277 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 00:43:43.878630 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:43:43.879167 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:43:43.883076 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 00:43:43.883507 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 00:43:43.886607 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 30 00:43:43.889085 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 00:43:43.892675 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 00:43:43.898572 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 00:43:43.902560 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 00:43:43.940216 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 00:43:43.951935 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 00:43:43.965955 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 00:43:43.968931 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 00:43:43.986240 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 00:43:43.996972 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 00:43:43.999631 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 00:43:44.012241 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 00:43:44.016996 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 00:43:44.021630 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:43:44.031000 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 00:43:44.059216 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 00:43:44.062876 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Apr 30 00:43:44.075082 systemd-journald[1598]: Time spent on flushing to /var/log/journal/ec21440092a0667bab19bd443c2889d8 is 55.574ms for 897 entries. Apr 30 00:43:44.075082 systemd-journald[1598]: System Journal (/var/log/journal/ec21440092a0667bab19bd443c2889d8) is 8.0M, max 195.6M, 187.6M free. Apr 30 00:43:44.138915 systemd-journald[1598]: Received client request to flush runtime journal. Apr 30 00:43:44.066219 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 00:43:44.108586 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 30 00:43:44.111527 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 00:43:44.148511 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 00:43:44.175195 systemd-tmpfiles[1652]: ACLs are not supported, ignoring. Apr 30 00:43:44.175233 systemd-tmpfiles[1652]: ACLs are not supported, ignoring. Apr 30 00:43:44.194920 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 00:43:44.215077 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 00:43:44.219690 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 00:43:44.227215 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:43:44.238089 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 00:43:44.292756 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 00:43:44.308480 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 00:43:44.311717 udevadm[1672]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 00:43:44.341400 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. 
Apr 30 00:43:44.342071 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Apr 30 00:43:44.352684 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 00:43:45.079169 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 00:43:45.089100 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 00:43:45.156387 systemd-udevd[1682]: Using default interface naming scheme 'v255'. Apr 30 00:43:45.192599 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 00:43:45.202978 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 00:43:45.235036 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 00:43:45.337933 (udev-worker)[1702]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:43:45.340642 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 30 00:43:45.434435 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 00:43:45.607835 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1684) Apr 30 00:43:45.625076 systemd-networkd[1686]: lo: Link UP Apr 30 00:43:45.625197 systemd-networkd[1686]: lo: Gained carrier Apr 30 00:43:45.629284 systemd-networkd[1686]: Enumeration completed Apr 30 00:43:45.630217 systemd-networkd[1686]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:43:45.630225 systemd-networkd[1686]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 00:43:45.633273 systemd-networkd[1686]: eth0: Link UP Apr 30 00:43:45.633643 systemd-networkd[1686]: eth0: Gained carrier Apr 30 00:43:45.633676 systemd-networkd[1686]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 00:43:45.644866 systemd-networkd[1686]: eth0: DHCPv4 address 172.31.27.157/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 30 00:43:45.647242 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 00:43:45.651008 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 00:43:45.677301 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 00:43:45.849013 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 00:43:45.879991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 30 00:43:45.883315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 00:43:45.893057 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 00:43:45.923710 lvm[1811]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:43:45.961947 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 00:43:45.965157 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 00:43:45.979061 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 00:43:45.989369 lvm[1814]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 00:43:46.027531 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 00:43:46.031428 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Apr 30 00:43:46.034967 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 00:43:46.035185 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 00:43:46.037882 systemd[1]: Reached target machines.target - Containers. Apr 30 00:43:46.041957 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 30 00:43:46.050061 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 00:43:46.056046 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 00:43:46.060043 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:43:46.074028 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 00:43:46.079544 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 30 00:43:46.094058 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 00:43:46.106787 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 00:43:46.128619 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 00:43:46.137847 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 00:43:46.146916 kernel: loop0: detected capacity change from 0 to 114328 Apr 30 00:43:46.144155 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Apr 30 00:43:46.258777 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 00:43:46.292943 kernel: loop1: detected capacity change from 0 to 194096 Apr 30 00:43:46.363775 kernel: loop2: detected capacity change from 0 to 114432 Apr 30 00:43:46.473166 kernel: loop3: detected capacity change from 0 to 52536 Apr 30 00:43:46.522773 kernel: loop4: detected capacity change from 0 to 114328 Apr 30 00:43:46.541107 kernel: loop5: detected capacity change from 0 to 194096 Apr 30 00:43:46.569789 kernel: loop6: detected capacity change from 0 to 114432 Apr 30 00:43:46.581787 kernel: loop7: detected capacity change from 0 to 52536 Apr 30 00:43:46.600584 (sd-merge)[1835]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 30 00:43:46.601596 (sd-merge)[1835]: Merged extensions into '/usr'. Apr 30 00:43:46.608337 systemd[1]: Reloading requested from client PID 1822 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 00:43:46.608364 systemd[1]: Reloading... Apr 30 00:43:46.724771 zram_generator::config[1863]: No configuration found. Apr 30 00:43:46.727893 systemd-networkd[1686]: eth0: Gained IPv6LL Apr 30 00:43:47.030772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:43:47.187948 systemd[1]: Reloading finished in 578 ms. Apr 30 00:43:47.215560 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 00:43:47.218971 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 00:43:47.237016 systemd[1]: Starting ensure-sysext.service... Apr 30 00:43:47.252085 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 00:43:47.267079 systemd[1]: Reloading requested from client PID 1922 ('systemctl') (unit ensure-sysext.service)... 
Apr 30 00:43:47.267107 systemd[1]: Reloading... Apr 30 00:43:47.293262 systemd-tmpfiles[1923]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 00:43:47.293973 systemd-tmpfiles[1923]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 00:43:47.295818 systemd-tmpfiles[1923]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 00:43:47.296363 systemd-tmpfiles[1923]: ACLs are not supported, ignoring. Apr 30 00:43:47.296516 systemd-tmpfiles[1923]: ACLs are not supported, ignoring. Apr 30 00:43:47.303354 systemd-tmpfiles[1923]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:43:47.303383 systemd-tmpfiles[1923]: Skipping /boot Apr 30 00:43:47.334522 systemd-tmpfiles[1923]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 00:43:47.334542 systemd-tmpfiles[1923]: Skipping /boot Apr 30 00:43:47.454887 zram_generator::config[1949]: No configuration found. Apr 30 00:43:47.554671 ldconfig[1818]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 00:43:47.712481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:43:47.869507 systemd[1]: Reloading finished in 601 ms. Apr 30 00:43:47.895373 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 00:43:47.909783 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 00:43:47.925019 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:43:47.935112 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Apr 30 00:43:47.947031 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 00:43:47.957054 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 00:43:47.974104 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 00:43:47.996009 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:43:48.003936 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:43:48.029351 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:43:48.037298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 00:43:48.041570 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:43:48.064396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 00:43:48.068881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 00:43:48.077526 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 00:43:48.077920 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 00:43:48.091163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 00:43:48.101255 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 00:43:48.110723 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 00:43:48.118901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 00:43:48.120701 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Apr 30 00:43:48.144981 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:43:48.148623 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:43:48.149024 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:43:48.163430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:43:48.164808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:43:48.186931 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:43:48.191963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:43:48.221517 augenrules[2055]: No rules
Apr 30 00:43:48.221295 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:43:48.224076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:43:48.234149 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:43:48.247062 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:43:48.259042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:43:48.261289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:43:48.261400 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:43:48.270055 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:43:48.273146 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:43:48.306584 systemd-resolved[2019]: Positive Trust Anchors:
Apr 30 00:43:48.306616 systemd-resolved[2019]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:43:48.306680 systemd-resolved[2019]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:43:48.307331 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:43:48.307754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:43:48.314341 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:43:48.318792 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:43:48.319220 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:43:48.322226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:43:48.322604 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:43:48.340212 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:43:48.340392 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:43:48.340485 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:43:48.343020 systemd-resolved[2019]: Defaulting to hostname 'linux'.
Apr 30 00:43:48.347808 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:43:48.350275 systemd[1]: Reached target network.target - Network.
Apr 30 00:43:48.352318 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 00:43:48.354460 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:43:48.365358 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:43:48.368066 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:43:48.375423 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:43:48.378125 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:43:48.381031 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:43:48.383365 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:43:48.385967 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:43:48.388495 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:43:48.388561 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:43:48.390835 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:43:48.394590 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:43:48.399902 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:43:48.404904 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:43:48.410342 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:43:48.414936 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:43:48.417050 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:43:48.421440 systemd[1]: System is tainted: cgroupsv1
Apr 30 00:43:48.421524 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:43:48.421569 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:43:48.431902 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:43:48.442127 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 30 00:43:48.455078 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:43:48.461462 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:43:48.468046 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:43:48.471065 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:43:48.481340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:43:48.502080 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:43:48.508153 systemd[1]: Started ntpd.service - Network Time Service.
Apr 30 00:43:48.530928 jq[2085]: false
Apr 30 00:43:48.532022 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 00:43:48.548366 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:43:48.584221 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 30 00:43:48.594579 dbus-daemon[2084]: [system] SELinux support is enabled
Apr 30 00:43:48.607925 dbus-daemon[2084]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1686 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 30 00:43:48.631146 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:43:48.643514 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found loop4
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found loop5
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found loop6
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found loop7
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found nvme0n1
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found nvme0n1p1
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found nvme0n1p2
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found nvme0n1p3
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found usr
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found nvme0n1p4
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found nvme0n1p6
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found nvme0n1p7
Apr 30 00:43:48.661717 extend-filesystems[2086]: Found nvme0n1p9
Apr 30 00:43:48.661717 extend-filesystems[2086]: Checking size of /dev/nvme0n1p9
Apr 30 00:43:48.671996 ntpd[2089]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:34 UTC 2025 (1): Starting
Apr 30 00:43:48.731330 coreos-metadata[2082]: Apr 30 00:43:48.719 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 30 00:43:48.731330 coreos-metadata[2082]: Apr 30 00:43:48.724 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 30 00:43:48.731330 coreos-metadata[2082]: Apr 30 00:43:48.724 INFO Fetch successful
Apr 30 00:43:48.731330 coreos-metadata[2082]: Apr 30 00:43:48.724 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 30 00:43:48.731330 coreos-metadata[2082]: Apr 30 00:43:48.727 INFO Fetch successful
Apr 30 00:43:48.731330 coreos-metadata[2082]: Apr 30 00:43:48.727 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 22:12:34 UTC 2025 (1): Starting
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: ----------------------------------------------------
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: ntp-4 is maintained by Network Time Foundation,
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: corporation. Support and training for ntp-4 are
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: available at https://www.nwtime.org/support
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: ----------------------------------------------------
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: proto: precision = 0.096 usec (-23)
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: basedate set to 2025-04-17
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: gps base set to 2025-04-20 (week 2363)
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: Listen normally on 3 eth0 172.31.27.157:123
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: Listen normally on 4 lo [::1]:123
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: Listen normally on 5 eth0 [fe80::4c0:7dff:fe29:2e31%2]:123
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: Listening on routing socket on fd #22 for interface updates
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 00:43:48.736945 ntpd[2089]: 30 Apr 00:43:48 ntpd[2089]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 00:43:48.767125 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Apr 30 00:43:48.691087 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:43:48.672050 ntpd[2089]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 30 00:43:48.784083 extend-filesystems[2086]: Resized partition /dev/nvme0n1p9
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.736 INFO Fetch successful
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.736 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.743 INFO Fetch successful
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.743 INFO Fetch failed with 404: resource not found
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.743 INFO Fetch successful
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.744 INFO Fetch successful
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.744 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.759 INFO Fetch successful
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.759 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.759 INFO Fetch successful
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.760 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 30 00:43:48.793075 coreos-metadata[2082]: Apr 30 00:43:48.760 INFO Fetch successful
Apr 30 00:43:48.735864 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:43:48.672071 ntpd[2089]: ----------------------------------------------------
Apr 30 00:43:48.800270 extend-filesystems[2119]: resize2fs 1.47.1 (20-May-2024)
Apr 30 00:43:48.762107 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:43:48.672090 ntpd[2089]: ntp-4 is maintained by Network Time Foundation,
Apr 30 00:43:48.775627 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:43:48.672108 ntpd[2089]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 30 00:43:48.780713 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:43:48.672126 ntpd[2089]: corporation. Support and training for ntp-4 are
Apr 30 00:43:48.818113 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:43:48.672145 ntpd[2089]: available at https://www.nwtime.org/support
Apr 30 00:43:48.818608 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:43:48.672163 ntpd[2089]: ----------------------------------------------------
Apr 30 00:43:48.683831 ntpd[2089]: proto: precision = 0.096 usec (-23)
Apr 30 00:43:48.832991 jq[2124]: true
Apr 30 00:43:48.684291 ntpd[2089]: basedate set to 2025-04-17
Apr 30 00:43:48.684315 ntpd[2089]: gps base set to 2025-04-20 (week 2363)
Apr 30 00:43:48.690974 ntpd[2089]: Listen and drop on 0 v6wildcard [::]:123
Apr 30 00:43:48.691049 ntpd[2089]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 30 00:43:48.691305 ntpd[2089]: Listen normally on 2 lo 127.0.0.1:123
Apr 30 00:43:48.691368 ntpd[2089]: Listen normally on 3 eth0 172.31.27.157:123
Apr 30 00:43:48.691435 ntpd[2089]: Listen normally on 4 lo [::1]:123
Apr 30 00:43:48.691512 ntpd[2089]: Listen normally on 5 eth0 [fe80::4c0:7dff:fe29:2e31%2]:123
Apr 30 00:43:48.691573 ntpd[2089]: Listening on routing socket on fd #22 for interface updates
Apr 30 00:43:48.724679 ntpd[2089]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 00:43:48.724756 ntpd[2089]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 30 00:43:48.841942 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:43:48.844203 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:43:48.850570 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 00:43:48.868408 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:43:48.871118 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:43:48.914091 update_engine[2121]: I20250430 00:43:48.912173 2121 main.cc:92] Flatcar Update Engine starting
Apr 30 00:43:48.935945 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Apr 30 00:43:48.936039 update_engine[2121]: I20250430 00:43:48.931123 2121 update_check_scheduler.cc:74] Next update check in 8m56s
Apr 30 00:43:48.960717 jq[2137]: true
Apr 30 00:43:48.967512 (ntainerd)[2141]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:43:48.983870 extend-filesystems[2119]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 30 00:43:48.983870 extend-filesystems[2119]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 30 00:43:48.983870 extend-filesystems[2119]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Apr 30 00:43:48.982709 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:43:49.011832 extend-filesystems[2086]: Resized filesystem in /dev/nvme0n1p9
Apr 30 00:43:48.992389 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:43:49.013511 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 30 00:43:49.041145 dbus-daemon[2084]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 30 00:43:49.043476 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:43:49.052598 tar[2134]: linux-arm64/helm
Apr 30 00:43:49.054608 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:43:49.055880 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:43:49.055965 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:43:49.078470 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 30 00:43:49.082003 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:43:49.082062 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:43:49.085579 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:43:49.115023 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:43:49.185423 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 30 00:43:49.216364 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 30 00:43:49.279771 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (2201)
Apr 30 00:43:49.290238 bash[2197]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:43:49.298640 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:43:49.318568 systemd-logind[2113]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 30 00:43:49.318630 systemd-logind[2113]: Watching system buttons on /dev/input/event1 (Sleep Button)
Apr 30 00:43:49.319045 systemd-logind[2113]: New seat seat0.
Apr 30 00:43:49.368245 systemd[1]: Starting sshkeys.service...
Apr 30 00:43:49.374904 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:43:49.431543 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 30 00:43:49.443228 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 30 00:43:49.497754 amazon-ssm-agent[2195]: Initializing new seelog logger
Apr 30 00:43:49.505715 amazon-ssm-agent[2195]: New Seelog Logger Creation Complete
Apr 30 00:43:49.505715 amazon-ssm-agent[2195]: 2025/04/30 00:43:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 00:43:49.505715 amazon-ssm-agent[2195]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 00:43:49.517763 amazon-ssm-agent[2195]: 2025/04/30 00:43:49 processing appconfig overrides
Apr 30 00:43:49.517763 amazon-ssm-agent[2195]: 2025/04/30 00:43:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 00:43:49.517763 amazon-ssm-agent[2195]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 00:43:49.517763 amazon-ssm-agent[2195]: 2025/04/30 00:43:49 processing appconfig overrides
Apr 30 00:43:49.524054 amazon-ssm-agent[2195]: 2025-04-30 00:43:49 INFO Proxy environment variables:
Apr 30 00:43:49.533041 amazon-ssm-agent[2195]: 2025/04/30 00:43:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 00:43:49.533041 amazon-ssm-agent[2195]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 00:43:49.533041 amazon-ssm-agent[2195]: 2025/04/30 00:43:49 processing appconfig overrides
Apr 30 00:43:49.551764 amazon-ssm-agent[2195]: 2025/04/30 00:43:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 00:43:49.551764 amazon-ssm-agent[2195]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 30 00:43:49.551764 amazon-ssm-agent[2195]: 2025/04/30 00:43:49 processing appconfig overrides
Apr 30 00:43:49.628753 amazon-ssm-agent[2195]: 2025-04-30 00:43:49 INFO http_proxy:
Apr 30 00:43:49.726697 coreos-metadata[2221]: Apr 30 00:43:49.726 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 30 00:43:49.729073 coreos-metadata[2221]: Apr 30 00:43:49.728 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 30 00:43:49.729999 coreos-metadata[2221]: Apr 30 00:43:49.729 INFO Fetch successful
Apr 30 00:43:49.729999 coreos-metadata[2221]: Apr 30 00:43:49.729 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 30 00:43:49.733395 amazon-ssm-agent[2195]: 2025-04-30 00:43:49 INFO no_proxy:
Apr 30 00:43:49.733517 coreos-metadata[2221]: Apr 30 00:43:49.733 INFO Fetch successful
Apr 30 00:43:49.740749 unknown[2221]: wrote ssh authorized keys file for user: core
Apr 30 00:43:49.831830 update-ssh-keys[2276]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:43:49.838217 amazon-ssm-agent[2195]: 2025-04-30 00:43:49 INFO https_proxy:
Apr 30 00:43:49.843477 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 30 00:43:49.859450 systemd[1]: Finished sshkeys.service.
Apr 30 00:43:49.897115 locksmithd[2177]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:43:49.939063 amazon-ssm-agent[2195]: 2025-04-30 00:43:49 INFO Checking if agent identity type OnPrem can be assumed
Apr 30 00:43:49.991265 containerd[2141]: time="2025-04-30T00:43:49.991042112Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 00:43:50.048752 amazon-ssm-agent[2195]: 2025-04-30 00:43:49 INFO Checking if agent identity type EC2 can be assumed
Apr 30 00:43:50.141267 dbus-daemon[2084]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 30 00:43:50.146486 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 30 00:43:50.156097 dbus-daemon[2084]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2175 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 30 00:43:50.162870 amazon-ssm-agent[2195]: 2025-04-30 00:43:50 INFO Agent will take identity from EC2
Apr 30 00:43:50.179422 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 30 00:43:50.263681 amazon-ssm-agent[2195]: 2025-04-30 00:43:50 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 30 00:43:50.288072 polkitd[2321]: Started polkitd version 121
Apr 30 00:43:50.322589 containerd[2141]: time="2025-04-30T00:43:50.321418734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.335020 containerd[2141]: time="2025-04-30T00:43:50.334905510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:43:50.335020 containerd[2141]: time="2025-04-30T00:43:50.334982850Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:43:50.335020 containerd[2141]: time="2025-04-30T00:43:50.335020374Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:43:50.336424 containerd[2141]: time="2025-04-30T00:43:50.335351526Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:43:50.336424 containerd[2141]: time="2025-04-30T00:43:50.335400366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.336424 containerd[2141]: time="2025-04-30T00:43:50.335529222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:43:50.336424 containerd[2141]: time="2025-04-30T00:43:50.335558154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.339465 polkitd[2321]: Loading rules from directory /etc/polkit-1/rules.d
Apr 30 00:43:50.343029 containerd[2141]: time="2025-04-30T00:43:50.340085394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:43:50.343029 containerd[2141]: time="2025-04-30T00:43:50.340138242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.343029 containerd[2141]: time="2025-04-30T00:43:50.340176102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:43:50.343029 containerd[2141]: time="2025-04-30T00:43:50.340203642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.343029 containerd[2141]: time="2025-04-30T00:43:50.340432278Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.339584 polkitd[2321]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 30 00:43:50.347773 containerd[2141]: time="2025-04-30T00:43:50.347669118Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:43:50.348503 containerd[2141]: time="2025-04-30T00:43:50.348066558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:43:50.348503 containerd[2141]: time="2025-04-30T00:43:50.348117030Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:43:50.348503 containerd[2141]: time="2025-04-30T00:43:50.348319734Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:43:50.348503 containerd[2141]: time="2025-04-30T00:43:50.348417594Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:43:50.351110 polkitd[2321]: Finished loading, compiling and executing 2 rules
Apr 30 00:43:50.353800 dbus-daemon[2084]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 30 00:43:50.354123 systemd[1]: Started polkit.service - Authorization Manager.
Apr 30 00:43:50.361157 polkitd[2321]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 30 00:43:50.363053 containerd[2141]: time="2025-04-30T00:43:50.362839422Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:43:50.363053 containerd[2141]: time="2025-04-30T00:43:50.362961054Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:43:50.363222 containerd[2141]: time="2025-04-30T00:43:50.363097446Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:43:50.363222 containerd[2141]: time="2025-04-30T00:43:50.363139842Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:43:50.363222 containerd[2141]: time="2025-04-30T00:43:50.363177018Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:43:50.364083 containerd[2141]: time="2025-04-30T00:43:50.363447822Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:43:50.364399 containerd[2141]: time="2025-04-30T00:43:50.364120350Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:43:50.364399 containerd[2141]: time="2025-04-30T00:43:50.364373490Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:43:50.364664 containerd[2141]: time="2025-04-30T00:43:50.364408458Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:43:50.364664 containerd[2141]: time="2025-04-30T00:43:50.364441290Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:43:50.364664 containerd[2141]: time="2025-04-30T00:43:50.364475658Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.364664 containerd[2141]: time="2025-04-30T00:43:50.364507782Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.364664 containerd[2141]: time="2025-04-30T00:43:50.364546914Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.364664 containerd[2141]: time="2025-04-30T00:43:50.364628898Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.365418 containerd[2141]: time="2025-04-30T00:43:50.364662006Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.365418 containerd[2141]: time="2025-04-30T00:43:50.364695510Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.365987 amazon-ssm-agent[2195]: 2025-04-30 00:43:50 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 30 00:43:50.368185 containerd[2141]: time="2025-04-30T00:43:50.368099682Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.368185 containerd[2141]: time="2025-04-30T00:43:50.368167962Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:43:50.368473 containerd[2141]: time="2025-04-30T00:43:50.368221830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.368473 containerd[2141]: time="2025-04-30T00:43:50.368255874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.368473 containerd[2141]: time="2025-04-30T00:43:50.368286546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.368473 containerd[2141]: time="2025-04-30T00:43:50.368320518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.368473 containerd[2141]: time="2025-04-30T00:43:50.368350974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.368473 containerd[2141]: time="2025-04-30T00:43:50.368383962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.368473 containerd[2141]: time="2025-04-30T00:43:50.368429010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.368473 containerd[2141]: time="2025-04-30T00:43:50.368460018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.370720 containerd[2141]: time="2025-04-30T00:43:50.368493366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.370720 containerd[2141]: time="2025-04-30T00:43:50.368528634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.370720 containerd[2141]: time="2025-04-30T00:43:50.368560470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.370720 containerd[2141]: time="2025-04-30T00:43:50.368589390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.370720 containerd[2141]: time="2025-04-30T00:43:50.368620182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.370720 containerd[2141]: time="2025-04-30T00:43:50.368663022Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:43:50.370720 containerd[2141]: time="2025-04-30T00:43:50.368716422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.370720 containerd[2141]: time="2025-04-30T00:43:50.368770758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:43:50.370720 containerd[2141]: time="2025-04-30T00:43:50.368799690Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:43:50.373467 containerd[2141]: time="2025-04-30T00:43:50.371563614Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:43:50.373467 containerd[2141]: time="2025-04-30T00:43:50.372295974Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:43:50.373467 containerd[2141]: time="2025-04-30T00:43:50.372334158Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..."
type=io.containerd.internal.v1 Apr 30 00:43:50.373467 containerd[2141]: time="2025-04-30T00:43:50.372365886Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 00:43:50.373467 containerd[2141]: time="2025-04-30T00:43:50.372391230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 00:43:50.373467 containerd[2141]: time="2025-04-30T00:43:50.372423378Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 00:43:50.373467 containerd[2141]: time="2025-04-30T00:43:50.372447702Z" level=info msg="NRI interface is disabled by configuration." Apr 30 00:43:50.373467 containerd[2141]: time="2025-04-30T00:43:50.372474030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 00:43:50.379401 containerd[2141]: time="2025-04-30T00:43:50.376163934Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 00:43:50.379401 containerd[2141]: time="2025-04-30T00:43:50.376951314Z" level=info msg="Connect containerd service" Apr 30 00:43:50.379401 containerd[2141]: time="2025-04-30T00:43:50.377022798Z" level=info msg="using legacy CRI server" Apr 30 00:43:50.379401 containerd[2141]: time="2025-04-30T00:43:50.377045190Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 00:43:50.379401 containerd[2141]: 
time="2025-04-30T00:43:50.377861922Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 00:43:50.385184 containerd[2141]: time="2025-04-30T00:43:50.384432570Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 00:43:50.385184 containerd[2141]: time="2025-04-30T00:43:50.384656958Z" level=info msg="Start subscribing containerd event" Apr 30 00:43:50.385184 containerd[2141]: time="2025-04-30T00:43:50.384759834Z" level=info msg="Start recovering state" Apr 30 00:43:50.385184 containerd[2141]: time="2025-04-30T00:43:50.384887802Z" level=info msg="Start event monitor" Apr 30 00:43:50.385184 containerd[2141]: time="2025-04-30T00:43:50.384913230Z" level=info msg="Start snapshots syncer" Apr 30 00:43:50.385184 containerd[2141]: time="2025-04-30T00:43:50.384936090Z" level=info msg="Start cni network conf syncer for default" Apr 30 00:43:50.385184 containerd[2141]: time="2025-04-30T00:43:50.384954726Z" level=info msg="Start streaming server" Apr 30 00:43:50.390624 containerd[2141]: time="2025-04-30T00:43:50.389031042Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 00:43:50.390624 containerd[2141]: time="2025-04-30T00:43:50.389163186Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 00:43:50.399916 containerd[2141]: time="2025-04-30T00:43:50.398438322Z" level=info msg="containerd successfully booted in 0.418520s" Apr 30 00:43:50.398719 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 00:43:50.433566 systemd-hostnamed[2175]: Hostname set to (transient) Apr 30 00:43:50.435801 systemd-resolved[2019]: System hostname changed to 'ip-172-31-27-157'. 
Apr 30 00:43:50.467766 amazon-ssm-agent[2195]: 2025-04-30 00:43:50 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 30 00:43:50.564181 amazon-ssm-agent[2195]: 2025-04-30 00:43:50 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 30 00:43:50.664412 amazon-ssm-agent[2195]: 2025-04-30 00:43:50 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 30 00:43:50.767015 amazon-ssm-agent[2195]: 2025-04-30 00:43:50 INFO [amazon-ssm-agent] Starting Core Agent Apr 30 00:43:50.868909 amazon-ssm-agent[2195]: 2025-04-30 00:43:50 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 30 00:43:50.969836 amazon-ssm-agent[2195]: 2025-04-30 00:43:50 INFO [Registrar] Starting registrar module Apr 30 00:43:51.035210 tar[2134]: linux-arm64/LICENSE Apr 30 00:43:51.035210 tar[2134]: linux-arm64/README.md Apr 30 00:43:51.071880 amazon-ssm-agent[2195]: 2025-04-30 00:43:50 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 30 00:43:51.083536 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 30 00:43:51.463180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:43:51.479570 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:43:51.647764 amazon-ssm-agent[2195]: 2025-04-30 00:43:51 INFO [EC2Identity] EC2 registration was successful. 
Apr 30 00:43:51.685283 amazon-ssm-agent[2195]: 2025-04-30 00:43:51 INFO [CredentialRefresher] credentialRefresher has started Apr 30 00:43:51.685283 amazon-ssm-agent[2195]: 2025-04-30 00:43:51 INFO [CredentialRefresher] Starting credentials refresher loop Apr 30 00:43:51.685513 amazon-ssm-agent[2195]: 2025-04-30 00:43:51 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 30 00:43:51.747872 amazon-ssm-agent[2195]: 2025-04-30 00:43:51 INFO [CredentialRefresher] Next credential rotation will be in 30.9749927183 minutes Apr 30 00:43:52.221511 sshd_keygen[2142]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 00:43:52.279434 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 00:43:52.293280 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 00:43:52.319533 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 00:43:52.320109 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 00:43:52.331476 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 30 00:43:52.363008 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 00:43:52.378346 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 00:43:52.384134 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 00:43:52.386989 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 00:43:52.389228 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 00:43:52.392898 systemd[1]: Startup finished in 16.369s (kernel) + 10.446s (userspace) = 26.816s. 
Apr 30 00:43:52.569112 kubelet[2359]: E0430 00:43:52.568951 2359 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:43:52.574175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:43:52.574584 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:43:52.711544 amazon-ssm-agent[2195]: 2025-04-30 00:43:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 30 00:43:52.813002 amazon-ssm-agent[2195]: 2025-04-30 00:43:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2392) started Apr 30 00:43:52.914098 amazon-ssm-agent[2195]: 2025-04-30 00:43:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 30 00:43:55.176804 systemd-resolved[2019]: Clock change detected. Flushing caches. Apr 30 00:43:56.395790 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 00:43:56.404144 systemd[1]: Started sshd@0-172.31.27.157:22-147.75.109.163:47004.service - OpenSSH per-connection server daemon (147.75.109.163:47004). Apr 30 00:43:56.695539 sshd[2401]: Accepted publickey for core from 147.75.109.163 port 47004 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:43:56.698967 sshd[2401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:56.714415 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 00:43:56.725075 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Apr 30 00:43:56.731003 systemd-logind[2113]: New session 1 of user core. Apr 30 00:43:56.749887 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 00:43:56.764315 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 00:43:56.780464 (systemd)[2407]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 00:43:57.010415 systemd[2407]: Queued start job for default target default.target. Apr 30 00:43:57.011618 systemd[2407]: Created slice app.slice - User Application Slice. Apr 30 00:43:57.011685 systemd[2407]: Reached target paths.target - Paths. Apr 30 00:43:57.011721 systemd[2407]: Reached target timers.target - Timers. Apr 30 00:43:57.019869 systemd[2407]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 00:43:57.034075 systemd[2407]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 00:43:57.034189 systemd[2407]: Reached target sockets.target - Sockets. Apr 30 00:43:57.034221 systemd[2407]: Reached target basic.target - Basic System. Apr 30 00:43:57.034302 systemd[2407]: Reached target default.target - Main User Target. Apr 30 00:43:57.034360 systemd[2407]: Startup finished in 242ms. Apr 30 00:43:57.034526 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 00:43:57.041282 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 00:43:57.248175 systemd[1]: Started sshd@1-172.31.27.157:22-147.75.109.163:41708.service - OpenSSH per-connection server daemon (147.75.109.163:41708). Apr 30 00:43:57.509217 sshd[2419]: Accepted publickey for core from 147.75.109.163 port 41708 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:43:57.511770 sshd[2419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:57.520256 systemd-logind[2113]: New session 2 of user core. Apr 30 00:43:57.526276 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 30 00:43:57.703720 sshd[2419]: pam_unix(sshd:session): session closed for user core Apr 30 00:43:57.709694 systemd-logind[2113]: Session 2 logged out. Waiting for processes to exit. Apr 30 00:43:57.710835 systemd[1]: sshd@1-172.31.27.157:22-147.75.109.163:41708.service: Deactivated successfully. Apr 30 00:43:57.716322 systemd[1]: session-2.scope: Deactivated successfully. Apr 30 00:43:57.718207 systemd-logind[2113]: Removed session 2. Apr 30 00:43:57.752077 systemd[1]: Started sshd@2-172.31.27.157:22-147.75.109.163:41712.service - OpenSSH per-connection server daemon (147.75.109.163:41712). Apr 30 00:43:58.004805 sshd[2427]: Accepted publickey for core from 147.75.109.163 port 41712 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:43:58.007336 sshd[2427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:58.015058 systemd-logind[2113]: New session 3 of user core. Apr 30 00:43:58.027203 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 00:43:58.192972 sshd[2427]: pam_unix(sshd:session): session closed for user core Apr 30 00:43:58.199038 systemd[1]: sshd@2-172.31.27.157:22-147.75.109.163:41712.service: Deactivated successfully. Apr 30 00:43:58.204618 systemd[1]: session-3.scope: Deactivated successfully. Apr 30 00:43:58.206255 systemd-logind[2113]: Session 3 logged out. Waiting for processes to exit. Apr 30 00:43:58.208464 systemd-logind[2113]: Removed session 3. Apr 30 00:43:58.239163 systemd[1]: Started sshd@3-172.31.27.157:22-147.75.109.163:41720.service - OpenSSH per-connection server daemon (147.75.109.163:41720). Apr 30 00:43:58.493903 sshd[2436]: Accepted publickey for core from 147.75.109.163 port 41720 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:43:58.496424 sshd[2436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:58.504412 systemd-logind[2113]: New session 4 of user core. 
Apr 30 00:43:58.511197 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 00:43:58.691367 sshd[2436]: pam_unix(sshd:session): session closed for user core Apr 30 00:43:58.695940 systemd[1]: sshd@3-172.31.27.157:22-147.75.109.163:41720.service: Deactivated successfully. Apr 30 00:43:58.702277 systemd-logind[2113]: Session 4 logged out. Waiting for processes to exit. Apr 30 00:43:58.703528 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 00:43:58.706258 systemd-logind[2113]: Removed session 4. Apr 30 00:43:58.741539 systemd[1]: Started sshd@4-172.31.27.157:22-147.75.109.163:41732.service - OpenSSH per-connection server daemon (147.75.109.163:41732). Apr 30 00:43:58.994445 sshd[2444]: Accepted publickey for core from 147.75.109.163 port 41732 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:43:58.996976 sshd[2444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:59.004993 systemd-logind[2113]: New session 5 of user core. Apr 30 00:43:59.012182 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 30 00:43:59.169080 sudo[2448]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 00:43:59.169784 sudo[2448]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:43:59.183227 sudo[2448]: pam_unix(sudo:session): session closed for user root Apr 30 00:43:59.221264 sshd[2444]: pam_unix(sshd:session): session closed for user core Apr 30 00:43:59.228913 systemd[1]: sshd@4-172.31.27.157:22-147.75.109.163:41732.service: Deactivated successfully. Apr 30 00:43:59.233593 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 00:43:59.234223 systemd-logind[2113]: Session 5 logged out. Waiting for processes to exit. Apr 30 00:43:59.237313 systemd-logind[2113]: Removed session 5. 
Apr 30 00:43:59.272108 systemd[1]: Started sshd@5-172.31.27.157:22-147.75.109.163:41742.service - OpenSSH per-connection server daemon (147.75.109.163:41742). Apr 30 00:43:59.525990 sshd[2453]: Accepted publickey for core from 147.75.109.163 port 41742 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:43:59.528382 sshd[2453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:43:59.536451 systemd-logind[2113]: New session 6 of user core. Apr 30 00:43:59.546271 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 00:43:59.686175 sudo[2458]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 00:43:59.687367 sudo[2458]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:43:59.693428 sudo[2458]: pam_unix(sudo:session): session closed for user root Apr 30 00:43:59.703196 sudo[2457]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 30 00:43:59.703832 sudo[2457]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:43:59.725160 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 30 00:43:59.737587 auditctl[2461]: No rules Apr 30 00:43:59.738423 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 00:43:59.738991 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 30 00:43:59.755534 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 30 00:43:59.796354 augenrules[2480]: No rules Apr 30 00:43:59.800131 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Apr 30 00:43:59.803645 sudo[2457]: pam_unix(sudo:session): session closed for user root Apr 30 00:43:59.843272 sshd[2453]: pam_unix(sshd:session): session closed for user core Apr 30 00:43:59.849348 systemd[1]: sshd@5-172.31.27.157:22-147.75.109.163:41742.service: Deactivated successfully. Apr 30 00:43:59.855525 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 00:43:59.856991 systemd-logind[2113]: Session 6 logged out. Waiting for processes to exit. Apr 30 00:43:59.858712 systemd-logind[2113]: Removed session 6. Apr 30 00:43:59.890151 systemd[1]: Started sshd@6-172.31.27.157:22-147.75.109.163:41746.service - OpenSSH per-connection server daemon (147.75.109.163:41746). Apr 30 00:44:00.145896 sshd[2489]: Accepted publickey for core from 147.75.109.163 port 41746 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ Apr 30 00:44:00.149373 sshd[2489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:44:00.158139 systemd-logind[2113]: New session 7 of user core. Apr 30 00:44:00.164134 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 30 00:44:00.305841 sudo[2493]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 00:44:00.306455 sudo[2493]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 00:44:00.728079 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 30 00:44:00.729487 (dockerd)[2509]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 30 00:44:01.078148 dockerd[2509]: time="2025-04-30T00:44:01.077619717Z" level=info msg="Starting up" Apr 30 00:44:01.441436 dockerd[2509]: time="2025-04-30T00:44:01.441357455Z" level=info msg="Loading containers: start." 
Apr 30 00:44:01.597706 kernel: Initializing XFRM netlink socket Apr 30 00:44:01.631938 (udev-worker)[2531]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:44:01.719552 systemd-networkd[1686]: docker0: Link UP Apr 30 00:44:01.748120 dockerd[2509]: time="2025-04-30T00:44:01.748055472Z" level=info msg="Loading containers: done." Apr 30 00:44:01.783035 dockerd[2509]: time="2025-04-30T00:44:01.782946168Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 30 00:44:01.783677 dockerd[2509]: time="2025-04-30T00:44:01.783366312Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 30 00:44:01.783973 dockerd[2509]: time="2025-04-30T00:44:01.783807792Z" level=info msg="Daemon has completed initialization" Apr 30 00:44:01.784074 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck86110494-merged.mount: Deactivated successfully. Apr 30 00:44:01.833944 dockerd[2509]: time="2025-04-30T00:44:01.833623249Z" level=info msg="API listen on /run/docker.sock" Apr 30 00:44:01.834752 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 30 00:44:02.159188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 00:44:02.169013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:02.493913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 30 00:44:02.509372 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:44:02.611343 kubelet[2662]: E0430 00:44:02.611255 2662 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:02.619705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:02.620186 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:44:03.215017 containerd[2141]: time="2025-04-30T00:44:03.214893335Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 30 00:44:03.824308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12151855.mount: Deactivated successfully. 
Apr 30 00:44:05.194692 containerd[2141]: time="2025-04-30T00:44:05.193433137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:05.196012 containerd[2141]: time="2025-04-30T00:44:05.195945253Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150" Apr 30 00:44:05.197895 containerd[2141]: time="2025-04-30T00:44:05.197826025Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:05.203079 containerd[2141]: time="2025-04-30T00:44:05.203029513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:05.205477 containerd[2141]: time="2025-04-30T00:44:05.205415041Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.990462234s" Apr 30 00:44:05.205607 containerd[2141]: time="2025-04-30T00:44:05.205478365Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" Apr 30 00:44:05.244980 containerd[2141]: time="2025-04-30T00:44:05.244932242Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 30 00:44:06.747311 containerd[2141]: time="2025-04-30T00:44:06.746996021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:06.749119 containerd[2141]: time="2025-04-30T00:44:06.749066021Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550" Apr 30 00:44:06.750362 containerd[2141]: time="2025-04-30T00:44:06.749517173Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:06.755155 containerd[2141]: time="2025-04-30T00:44:06.755092361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:06.757679 containerd[2141]: time="2025-04-30T00:44:06.757594541Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.512397411s" Apr 30 00:44:06.757800 containerd[2141]: time="2025-04-30T00:44:06.757656497Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" Apr 30 00:44:06.798263 containerd[2141]: time="2025-04-30T00:44:06.797680421Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 30 00:44:07.880697 containerd[2141]: time="2025-04-30T00:44:07.880611787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:07.882723 containerd[2141]: 
time="2025-04-30T00:44:07.882619507Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945" Apr 30 00:44:07.883443 containerd[2141]: time="2025-04-30T00:44:07.883362199Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:07.894706 containerd[2141]: time="2025-04-30T00:44:07.894419323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:07.896201 containerd[2141]: time="2025-04-30T00:44:07.895851331Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.097372646s" Apr 30 00:44:07.896201 containerd[2141]: time="2025-04-30T00:44:07.895913467Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" Apr 30 00:44:07.934293 containerd[2141]: time="2025-04-30T00:44:07.934237891Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 30 00:44:09.176252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2081074189.mount: Deactivated successfully. 
Apr 30 00:44:09.723756 containerd[2141]: time="2025-04-30T00:44:09.723700160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:09.726050 containerd[2141]: time="2025-04-30T00:44:09.725975372Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705" Apr 30 00:44:09.728482 containerd[2141]: time="2025-04-30T00:44:09.728409332Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:09.733864 containerd[2141]: time="2025-04-30T00:44:09.733816160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:09.734901 containerd[2141]: time="2025-04-30T00:44:09.734616188Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.800315345s" Apr 30 00:44:09.734901 containerd[2141]: time="2025-04-30T00:44:09.734703872Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" Apr 30 00:44:09.771311 containerd[2141]: time="2025-04-30T00:44:09.771253736Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 30 00:44:10.391252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542646475.mount: Deactivated successfully. 
Apr 30 00:44:11.556731 containerd[2141]: time="2025-04-30T00:44:11.556125981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:11.558604 containerd[2141]: time="2025-04-30T00:44:11.558537573Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Apr 30 00:44:11.561330 containerd[2141]: time="2025-04-30T00:44:11.561254145Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:11.567619 containerd[2141]: time="2025-04-30T00:44:11.567540789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:11.570258 containerd[2141]: time="2025-04-30T00:44:11.569849877Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.798531605s" Apr 30 00:44:11.570258 containerd[2141]: time="2025-04-30T00:44:11.569904933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 30 00:44:11.606322 containerd[2141]: time="2025-04-30T00:44:11.606257529Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 30 00:44:12.134483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392820750.mount: Deactivated successfully. 
Apr 30 00:44:12.147729 containerd[2141]: time="2025-04-30T00:44:12.147481580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:12.149914 containerd[2141]: time="2025-04-30T00:44:12.149847992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Apr 30 00:44:12.152376 containerd[2141]: time="2025-04-30T00:44:12.152304512Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:12.157351 containerd[2141]: time="2025-04-30T00:44:12.157260476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:12.159793 containerd[2141]: time="2025-04-30T00:44:12.158983400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 552.664863ms" Apr 30 00:44:12.159793 containerd[2141]: time="2025-04-30T00:44:12.159041180Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 30 00:44:12.196504 containerd[2141]: time="2025-04-30T00:44:12.196164356Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 30 00:44:12.659211 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 00:44:12.667066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 00:44:12.797848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585195758.mount: Deactivated successfully. Apr 30 00:44:13.041053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:13.060412 (kubelet)[2844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:44:13.228702 kubelet[2844]: E0430 00:44:13.226442 2844 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:13.234857 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:13.235352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:44:16.106272 containerd[2141]: time="2025-04-30T00:44:16.106195115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:16.108560 containerd[2141]: time="2025-04-30T00:44:16.108479915Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Apr 30 00:44:16.110774 containerd[2141]: time="2025-04-30T00:44:16.110703695Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:16.117025 containerd[2141]: time="2025-04-30T00:44:16.116948640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:44:16.119462 containerd[2141]: time="2025-04-30T00:44:16.119413644Z" level=info msg="Pulled 
image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.923192648s" Apr 30 00:44:16.119772 containerd[2141]: time="2025-04-30T00:44:16.119604600Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Apr 30 00:44:19.977454 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 30 00:44:23.409211 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 30 00:44:23.419451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:23.716964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:23.733315 (kubelet)[2960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 00:44:23.815428 kubelet[2960]: E0430 00:44:23.815194 2960 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 00:44:23.821955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 00:44:23.822336 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 00:44:25.460873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:25.474126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 00:44:25.515258 systemd[1]: Reloading requested from client PID 2977 ('systemctl') (unit session-7.scope)... Apr 30 00:44:25.515465 systemd[1]: Reloading... Apr 30 00:44:25.730694 zram_generator::config[3017]: No configuration found. Apr 30 00:44:25.984692 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:44:26.157496 systemd[1]: Reloading finished in 641 ms. Apr 30 00:44:26.248201 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 30 00:44:26.248803 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 30 00:44:26.249847 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:26.266643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:26.547172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:26.554045 (kubelet)[3092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:44:26.634588 kubelet[3092]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:44:26.634588 kubelet[3092]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:44:26.634588 kubelet[3092]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:44:26.635231 kubelet[3092]: I0430 00:44:26.634751 3092 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:44:28.291331 kubelet[3092]: I0430 00:44:28.290043 3092 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:44:28.291331 kubelet[3092]: I0430 00:44:28.290085 3092 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:44:28.291331 kubelet[3092]: I0430 00:44:28.290395 3092 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:44:28.318219 kubelet[3092]: E0430 00:44:28.318169 3092 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.157:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:28.318865 kubelet[3092]: I0430 00:44:28.318824 3092 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:44:28.335414 kubelet[3092]: I0430 00:44:28.335370 3092 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 30 00:44:28.336194 kubelet[3092]: I0430 00:44:28.336140 3092 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:44:28.336472 kubelet[3092]: I0430 00:44:28.336196 3092 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-157","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:44:28.336642 kubelet[3092]: I0430 00:44:28.336501 3092 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 
00:44:28.336642 kubelet[3092]: I0430 00:44:28.336521 3092 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:44:28.336794 kubelet[3092]: I0430 00:44:28.336755 3092 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:44:28.338494 kubelet[3092]: I0430 00:44:28.338441 3092 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:44:28.338494 kubelet[3092]: I0430 00:44:28.338489 3092 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:44:28.338655 kubelet[3092]: I0430 00:44:28.338618 3092 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:44:28.338747 kubelet[3092]: I0430 00:44:28.338707 3092 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:44:28.340715 kubelet[3092]: I0430 00:44:28.340241 3092 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:44:28.340715 kubelet[3092]: I0430 00:44:28.340609 3092 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:44:28.340938 kubelet[3092]: W0430 00:44:28.340754 3092 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 30 00:44:28.342561 kubelet[3092]: I0430 00:44:28.341818 3092 server.go:1264] "Started kubelet" Apr 30 00:44:28.350106 kubelet[3092]: I0430 00:44:28.350068 3092 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:44:28.352712 kubelet[3092]: W0430 00:44:28.352392 3092 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-157&limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:28.352712 kubelet[3092]: E0430 00:44:28.352558 3092 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-157&limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:28.353525 kubelet[3092]: E0430 00:44:28.352729 3092 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.157:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.157:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-157.183af1fb91a072a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-157,UID:ip-172-31-27-157,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-157,},FirstTimestamp:2025-04-30 00:44:28.341785248 +0000 UTC m=+1.781245402,LastTimestamp:2025-04-30 00:44:28.341785248 +0000 UTC m=+1.781245402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-157,}" Apr 30 00:44:28.354194 kubelet[3092]: W0430 00:44:28.353758 3092 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.31.27.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:28.354194 kubelet[3092]: E0430 00:44:28.354085 3092 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:28.361707 kubelet[3092]: I0430 00:44:28.361071 3092 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 00:44:28.362353 kubelet[3092]: I0430 00:44:28.362304 3092 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:44:28.363219 kubelet[3092]: I0430 00:44:28.363182 3092 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:44:28.365099 kubelet[3092]: I0430 00:44:28.365021 3092 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:44:28.365579 kubelet[3092]: I0430 00:44:28.365552 3092 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:44:28.366771 kubelet[3092]: E0430 00:44:28.366648 3092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-157?timeout=10s\": dial tcp 172.31.27.157:6443: connect: connection refused" interval="200ms" Apr 30 00:44:28.367306 kubelet[3092]: I0430 00:44:28.367275 3092 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:44:28.367612 kubelet[3092]: I0430 00:44:28.367567 3092 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:44:28.367729 kubelet[3092]: I0430 00:44:28.367709 3092 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:44:28.371723 kubelet[3092]: E0430 00:44:28.370888 3092 kubelet.go:1467] "Image 
garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:44:28.371723 kubelet[3092]: I0430 00:44:28.367561 3092 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:44:28.372453 kubelet[3092]: W0430 00:44:28.371401 3092 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:28.372627 kubelet[3092]: E0430 00:44:28.372601 3092 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:28.374471 kubelet[3092]: I0430 00:44:28.374437 3092 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:44:28.385427 kubelet[3092]: I0430 00:44:28.383168 3092 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:44:28.385427 kubelet[3092]: I0430 00:44:28.385301 3092 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:44:28.385427 kubelet[3092]: I0430 00:44:28.385399 3092 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:44:28.385427 kubelet[3092]: I0430 00:44:28.385433 3092 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:44:28.385753 kubelet[3092]: E0430 00:44:28.385501 3092 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:44:28.408589 kubelet[3092]: W0430 00:44:28.408493 3092 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:28.418233 kubelet[3092]: E0430 00:44:28.418160 3092 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:28.438114 kubelet[3092]: I0430 00:44:28.438075 3092 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:44:28.438489 kubelet[3092]: I0430 00:44:28.438442 3092 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:44:28.438615 kubelet[3092]: I0430 00:44:28.438582 3092 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:44:28.442967 kubelet[3092]: I0430 00:44:28.442939 3092 policy_none.go:49] "None policy: Start" Apr 30 00:44:28.444279 kubelet[3092]: I0430 00:44:28.444245 3092 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:44:28.444407 kubelet[3092]: I0430 00:44:28.444292 3092 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:44:28.454311 kubelet[3092]: I0430 00:44:28.454250 3092 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:44:28.454694 kubelet[3092]: I0430 00:44:28.454540 3092 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:44:28.454813 kubelet[3092]: I0430 00:44:28.454789 3092 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:44:28.466202 kubelet[3092]: I0430 00:44:28.466117 3092 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-157" Apr 30 00:44:28.466730 kubelet[3092]: E0430 00:44:28.466655 3092 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.157:6443/api/v1/nodes\": dial tcp 172.31.27.157:6443: connect: connection refused" node="ip-172-31-27-157" Apr 30 00:44:28.466883 kubelet[3092]: E0430 00:44:28.466853 3092 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-157\" not found" Apr 30 00:44:28.486613 kubelet[3092]: I0430 00:44:28.486531 3092 topology_manager.go:215] "Topology Admit Handler" podUID="79291519c3a4d69593ce1dfec189fb6f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-157" Apr 30 00:44:28.488984 kubelet[3092]: I0430 00:44:28.488527 3092 topology_manager.go:215] "Topology Admit Handler" podUID="8b1529970624d1702729141278332294" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:28.492820 kubelet[3092]: I0430 00:44:28.490888 3092 topology_manager.go:215] "Topology Admit Handler" podUID="819222ca88d6ba189f7bd09b49f8a38a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-157" Apr 30 00:44:28.567713 kubelet[3092]: E0430 00:44:28.567544 3092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-157?timeout=10s\": dial tcp 172.31.27.157:6443: connect: connection refused" 
interval="400ms" Apr 30 00:44:28.568559 kubelet[3092]: I0430 00:44:28.568446 3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b1529970624d1702729141278332294-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-157\" (UID: \"8b1529970624d1702729141278332294\") " pod="kube-system/kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:28.568768 kubelet[3092]: I0430 00:44:28.568742 3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/819222ca88d6ba189f7bd09b49f8a38a-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-157\" (UID: \"819222ca88d6ba189f7bd09b49f8a38a\") " pod="kube-system/kube-scheduler-ip-172-31-27-157" Apr 30 00:44:28.568979 kubelet[3092]: I0430 00:44:28.568955 3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79291519c3a4d69593ce1dfec189fb6f-ca-certs\") pod \"kube-apiserver-ip-172-31-27-157\" (UID: \"79291519c3a4d69593ce1dfec189fb6f\") " pod="kube-system/kube-apiserver-ip-172-31-27-157" Apr 30 00:44:28.569183 kubelet[3092]: I0430 00:44:28.569159 3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b1529970624d1702729141278332294-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-157\" (UID: \"8b1529970624d1702729141278332294\") " pod="kube-system/kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:28.569376 kubelet[3092]: I0430 00:44:28.569350 3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b1529970624d1702729141278332294-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-157\" (UID: \"8b1529970624d1702729141278332294\") " 
pod="kube-system/kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:28.569582 kubelet[3092]: I0430 00:44:28.569556 3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79291519c3a4d69593ce1dfec189fb6f-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-157\" (UID: \"79291519c3a4d69593ce1dfec189fb6f\") " pod="kube-system/kube-apiserver-ip-172-31-27-157" Apr 30 00:44:28.569739 kubelet[3092]: I0430 00:44:28.569715 3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79291519c3a4d69593ce1dfec189fb6f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-157\" (UID: \"79291519c3a4d69593ce1dfec189fb6f\") " pod="kube-system/kube-apiserver-ip-172-31-27-157" Apr 30 00:44:28.569926 kubelet[3092]: I0430 00:44:28.569902 3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b1529970624d1702729141278332294-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-157\" (UID: \"8b1529970624d1702729141278332294\") " pod="kube-system/kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:28.570150 kubelet[3092]: I0430 00:44:28.570078 3092 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b1529970624d1702729141278332294-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-157\" (UID: \"8b1529970624d1702729141278332294\") " pod="kube-system/kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:28.669233 kubelet[3092]: I0430 00:44:28.669198 3092 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-157" Apr 30 00:44:28.670042 kubelet[3092]: E0430 00:44:28.669991 3092 kubelet_node_status.go:96] "Unable to register 
node with API server" err="Post \"https://172.31.27.157:6443/api/v1/nodes\": dial tcp 172.31.27.157:6443: connect: connection refused" node="ip-172-31-27-157" Apr 30 00:44:28.799527 containerd[2141]: time="2025-04-30T00:44:28.799445643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-157,Uid:79291519c3a4d69593ce1dfec189fb6f,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:28.801406 containerd[2141]: time="2025-04-30T00:44:28.801222963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-157,Uid:8b1529970624d1702729141278332294,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:28.806231 containerd[2141]: time="2025-04-30T00:44:28.806161311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-157,Uid:819222ca88d6ba189f7bd09b49f8a38a,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:28.969279 kubelet[3092]: E0430 00:44:28.969204 3092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-157?timeout=10s\": dial tcp 172.31.27.157:6443: connect: connection refused" interval="800ms" Apr 30 00:44:29.072704 kubelet[3092]: I0430 00:44:29.072649 3092 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-157" Apr 30 00:44:29.073617 kubelet[3092]: E0430 00:44:29.073563 3092 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.157:6443/api/v1/nodes\": dial tcp 172.31.27.157:6443: connect: connection refused" node="ip-172-31-27-157" Apr 30 00:44:29.339614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505048092.mount: Deactivated successfully. 
Apr 30 00:44:29.357315 containerd[2141]: time="2025-04-30T00:44:29.356401417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:29.358380 containerd[2141]: time="2025-04-30T00:44:29.358325737Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:44:29.360653 containerd[2141]: time="2025-04-30T00:44:29.360587449Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:29.362716 containerd[2141]: time="2025-04-30T00:44:29.362513797Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:29.365537 containerd[2141]: time="2025-04-30T00:44:29.365383177Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:29.367223 containerd[2141]: time="2025-04-30T00:44:29.367076017Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:44:29.369542 containerd[2141]: time="2025-04-30T00:44:29.369436849Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 30 00:44:29.372844 containerd[2141]: time="2025-04-30T00:44:29.372780037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:44:29.376078 
containerd[2141]: time="2025-04-30T00:44:29.375701149Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 574.338674ms" Apr 30 00:44:29.379543 containerd[2141]: time="2025-04-30T00:44:29.379479961Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.206918ms" Apr 30 00:44:29.402719 kubelet[3092]: W0430 00:44:29.401238 3092 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:29.402719 kubelet[3092]: E0430 00:44:29.401362 3092 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.157:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:29.426644 containerd[2141]: time="2025-04-30T00:44:29.425801594Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 625.704243ms" Apr 30 00:44:29.534834 kubelet[3092]: W0430 00:44:29.534615 3092 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.27.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:29.534834 kubelet[3092]: E0430 00:44:29.534764 3092 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.157:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:29.580740 containerd[2141]: time="2025-04-30T00:44:29.579931094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:29.580740 containerd[2141]: time="2025-04-30T00:44:29.580066814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:29.580740 containerd[2141]: time="2025-04-30T00:44:29.580107398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:29.586821 containerd[2141]: time="2025-04-30T00:44:29.586566614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:29.597529 containerd[2141]: time="2025-04-30T00:44:29.597192878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:29.597529 containerd[2141]: time="2025-04-30T00:44:29.597312326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:29.601279 containerd[2141]: time="2025-04-30T00:44:29.598102910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:29.601279 containerd[2141]: time="2025-04-30T00:44:29.598283378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:29.603378 containerd[2141]: time="2025-04-30T00:44:29.603208430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:29.603783 containerd[2141]: time="2025-04-30T00:44:29.603652455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:29.604107 containerd[2141]: time="2025-04-30T00:44:29.603916011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:29.606564 containerd[2141]: time="2025-04-30T00:44:29.606445383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:29.689307 kubelet[3092]: W0430 00:44:29.689167 3092 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-157&limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:29.689742 kubelet[3092]: E0430 00:44:29.689584 3092 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.157:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-157&limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:29.732752 containerd[2141]: time="2025-04-30T00:44:29.732413787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-157,Uid:79291519c3a4d69593ce1dfec189fb6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2a7aa87b512a68f54f2981b265aed417efbefa0e73bb44ecc6aad706336dd94\"" Apr 30 00:44:29.745997 containerd[2141]: time="2025-04-30T00:44:29.745933443Z" level=info msg="CreateContainer within sandbox \"d2a7aa87b512a68f54f2981b265aed417efbefa0e73bb44ecc6aad706336dd94\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:44:29.768283 containerd[2141]: time="2025-04-30T00:44:29.767814087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-157,Uid:819222ca88d6ba189f7bd09b49f8a38a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1be3abfb5cf9c04424c6d38c0d065b4669cb3b37494e2c78c6448c7835612340\"" Apr 30 00:44:29.770485 kubelet[3092]: E0430 00:44:29.770435 3092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-157?timeout=10s\": dial tcp 172.31.27.157:6443: connect: connection refused" interval="1.6s" 
Apr 30 00:44:29.773205 containerd[2141]: time="2025-04-30T00:44:29.773146083Z" level=info msg="CreateContainer within sandbox \"1be3abfb5cf9c04424c6d38c0d065b4669cb3b37494e2c78c6448c7835612340\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:44:29.780376 containerd[2141]: time="2025-04-30T00:44:29.779920119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-157,Uid:8b1529970624d1702729141278332294,Namespace:kube-system,Attempt:0,} returns sandbox id \"26a5dbf56b6307b05b0ffaa06d178f182093538e2b0d08da54ff53d79bbfe347\"" Apr 30 00:44:29.786418 containerd[2141]: time="2025-04-30T00:44:29.786352167Z" level=info msg="CreateContainer within sandbox \"26a5dbf56b6307b05b0ffaa06d178f182093538e2b0d08da54ff53d79bbfe347\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:44:29.796253 containerd[2141]: time="2025-04-30T00:44:29.796193163Z" level=info msg="CreateContainer within sandbox \"d2a7aa87b512a68f54f2981b265aed417efbefa0e73bb44ecc6aad706336dd94\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ff58bd8db97da6f0ea8faa9cb7c072da8971b2e145e95cd923cc07a7bbdce112\"" Apr 30 00:44:29.797420 containerd[2141]: time="2025-04-30T00:44:29.797371155Z" level=info msg="StartContainer for \"ff58bd8db97da6f0ea8faa9cb7c072da8971b2e145e95cd923cc07a7bbdce112\"" Apr 30 00:44:29.815799 containerd[2141]: time="2025-04-30T00:44:29.815491744Z" level=info msg="CreateContainer within sandbox \"1be3abfb5cf9c04424c6d38c0d065b4669cb3b37494e2c78c6448c7835612340\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"06d16203f0decee5694c8dd6b8850b593771780f48597c6c436f026b7db894ef\"" Apr 30 00:44:29.817438 containerd[2141]: time="2025-04-30T00:44:29.816809296Z" level=info msg="StartContainer for \"06d16203f0decee5694c8dd6b8850b593771780f48597c6c436f026b7db894ef\"" Apr 30 00:44:29.835049 containerd[2141]: time="2025-04-30T00:44:29.834976444Z" 
level=info msg="CreateContainer within sandbox \"26a5dbf56b6307b05b0ffaa06d178f182093538e2b0d08da54ff53d79bbfe347\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6191946b84b69dc326934250cab35a785e01e0201133bcbd4ada2b4656b9e91f\"" Apr 30 00:44:29.836192 containerd[2141]: time="2025-04-30T00:44:29.836130532Z" level=info msg="StartContainer for \"6191946b84b69dc326934250cab35a785e01e0201133bcbd4ada2b4656b9e91f\"" Apr 30 00:44:29.844703 kubelet[3092]: W0430 00:44:29.844086 3092 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:29.844703 kubelet[3092]: E0430 00:44:29.844168 3092 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.157:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.157:6443: connect: connection refused Apr 30 00:44:29.879746 kubelet[3092]: I0430 00:44:29.879243 3092 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-157" Apr 30 00:44:29.880729 kubelet[3092]: E0430 00:44:29.880432 3092 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.157:6443/api/v1/nodes\": dial tcp 172.31.27.157:6443: connect: connection refused" node="ip-172-31-27-157" Apr 30 00:44:29.963591 containerd[2141]: time="2025-04-30T00:44:29.963520312Z" level=info msg="StartContainer for \"ff58bd8db97da6f0ea8faa9cb7c072da8971b2e145e95cd923cc07a7bbdce112\" returns successfully" Apr 30 00:44:30.076532 containerd[2141]: time="2025-04-30T00:44:30.076197649Z" level=info msg="StartContainer for \"06d16203f0decee5694c8dd6b8850b593771780f48597c6c436f026b7db894ef\" returns successfully" Apr 30 00:44:30.134948 containerd[2141]: 
time="2025-04-30T00:44:30.133408165Z" level=info msg="StartContainer for \"6191946b84b69dc326934250cab35a785e01e0201133bcbd4ada2b4656b9e91f\" returns successfully" Apr 30 00:44:31.484391 kubelet[3092]: I0430 00:44:31.484014 3092 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-157" Apr 30 00:44:34.097703 update_engine[2121]: I20250430 00:44:34.094708 2121 update_attempter.cc:509] Updating boot flags... Apr 30 00:44:34.327261 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3378) Apr 30 00:44:34.962722 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3380) Apr 30 00:44:36.065110 kubelet[3092]: E0430 00:44:36.065057 3092 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-157\" not found" node="ip-172-31-27-157" Apr 30 00:44:36.180016 kubelet[3092]: E0430 00:44:36.179870 3092 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-157.183af1fb91a072a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-157,UID:ip-172-31-27-157,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-157,},FirstTimestamp:2025-04-30 00:44:28.341785248 +0000 UTC m=+1.781245402,LastTimestamp:2025-04-30 00:44:28.341785248 +0000 UTC m=+1.781245402,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-157,}" Apr 30 00:44:36.214533 kubelet[3092]: I0430 00:44:36.214490 3092 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-157" Apr 30 00:44:36.264700 kubelet[3092]: E0430 00:44:36.262437 3092 event.go:359] "Server rejected event (will not retry!)" 
err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-157.183af1fb935c361c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-157,UID:ip-172-31-27-157,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-27-157,},FirstTimestamp:2025-04-30 00:44:28.37086774 +0000 UTC m=+1.810327918,LastTimestamp:2025-04-30 00:44:28.37086774 +0000 UTC m=+1.810327918,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-157,}" Apr 30 00:44:36.345749 kubelet[3092]: I0430 00:44:36.345583 3092 apiserver.go:52] "Watching apiserver" Apr 30 00:44:36.371849 kubelet[3092]: I0430 00:44:36.369852 3092 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:44:38.018413 systemd[1]: Reloading requested from client PID 3548 ('systemctl') (unit session-7.scope)... Apr 30 00:44:38.018438 systemd[1]: Reloading... Apr 30 00:44:38.241851 zram_generator::config[3589]: No configuration found. Apr 30 00:44:38.502900 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:44:38.701066 systemd[1]: Reloading finished in 681 ms. Apr 30 00:44:38.766025 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:44:38.782354 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:44:38.783120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:38.797422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 30 00:44:39.102869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:44:39.123000 (kubelet)[3658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:44:39.234186 kubelet[3658]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:44:39.234186 kubelet[3658]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:44:39.234186 kubelet[3658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:44:39.235779 kubelet[3658]: I0430 00:44:39.235267 3658 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 00:44:39.249253 kubelet[3658]: I0430 00:44:39.249188 3658 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 30 00:44:39.250040 kubelet[3658]: I0430 00:44:39.249440 3658 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 00:44:39.250040 kubelet[3658]: I0430 00:44:39.250034 3658 server.go:927] "Client rotation is on, will bootstrap in background" Apr 30 00:44:39.254274 kubelet[3658]: I0430 00:44:39.253813 3658 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 30 00:44:39.256865 kubelet[3658]: I0430 00:44:39.256121 3658 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 00:44:39.266192 sudo[3671]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 30 00:44:39.267063 sudo[3671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 30 00:44:39.271331 kubelet[3658]: I0430 00:44:39.271273 3658 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 00:44:39.272710 kubelet[3658]: I0430 00:44:39.272558 3658 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 00:44:39.273002 kubelet[3658]: I0430 00:44:39.272628 3658 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-157","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"Gr
acePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 30 00:44:39.273154 kubelet[3658]: I0430 00:44:39.273005 3658 topology_manager.go:138] "Creating topology manager with none policy" Apr 30 00:44:39.273154 kubelet[3658]: I0430 00:44:39.273026 3658 container_manager_linux.go:301] "Creating device plugin manager" Apr 30 00:44:39.273154 kubelet[3658]: I0430 00:44:39.273088 3658 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:44:39.273324 kubelet[3658]: I0430 00:44:39.273271 3658 kubelet.go:400] "Attempting to sync node with API server" Apr 30 00:44:39.273324 kubelet[3658]: I0430 00:44:39.273294 3658 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 00:44:39.273460 kubelet[3658]: I0430 00:44:39.273344 3658 kubelet.go:312] "Adding apiserver pod source" Apr 30 00:44:39.273460 kubelet[3658]: I0430 00:44:39.273379 3658 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 00:44:39.280695 kubelet[3658]: I0430 00:44:39.277879 3658 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 30 00:44:39.280695 kubelet[3658]: I0430 00:44:39.278162 3658 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 00:44:39.282699 kubelet[3658]: I0430 00:44:39.281099 3658 server.go:1264] "Started kubelet" Apr 30 00:44:39.288001 kubelet[3658]: I0430 00:44:39.287936 3658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 00:44:39.299328 kubelet[3658]: I0430 00:44:39.299269 3658 server.go:163] "Starting to listen" address="0.0.0.0" 
port=10250 Apr 30 00:44:39.302334 kubelet[3658]: I0430 00:44:39.302301 3658 server.go:455] "Adding debug handlers to kubelet server" Apr 30 00:44:39.307781 kubelet[3658]: I0430 00:44:39.307648 3658 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 00:44:39.308276 kubelet[3658]: I0430 00:44:39.308251 3658 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 00:44:39.318153 kubelet[3658]: I0430 00:44:39.318119 3658 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 30 00:44:39.334294 kubelet[3658]: I0430 00:44:39.331486 3658 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 30 00:44:39.345065 kubelet[3658]: I0430 00:44:39.345033 3658 reconciler.go:26] "Reconciler: start to sync state" Apr 30 00:44:39.356409 kubelet[3658]: I0430 00:44:39.356059 3658 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:44:39.362716 kubelet[3658]: I0430 00:44:39.360912 3658 factory.go:221] Registration of the systemd container factory successfully Apr 30 00:44:39.362716 kubelet[3658]: I0430 00:44:39.361069 3658 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 00:44:39.373420 kubelet[3658]: I0430 00:44:39.369823 3658 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 30 00:44:39.374870 kubelet[3658]: I0430 00:44:39.374823 3658 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:44:39.378635 kubelet[3658]: I0430 00:44:39.376772 3658 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:44:39.378635 kubelet[3658]: E0430 00:44:39.376988 3658 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:44:39.382467 kubelet[3658]: I0430 00:44:39.381195 3658 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:44:39.401628 kubelet[3658]: E0430 00:44:39.398005 3658 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:44:39.432951 kubelet[3658]: I0430 00:44:39.432894 3658 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-157" Apr 30 00:44:39.454649 kubelet[3658]: I0430 00:44:39.451494 3658 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-27-157" Apr 30 00:44:39.454793 kubelet[3658]: I0430 00:44:39.454749 3658 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-157" Apr 30 00:44:39.477156 kubelet[3658]: E0430 00:44:39.477104 3658 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 00:44:39.608936 kubelet[3658]: I0430 00:44:39.607927 3658 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:44:39.608936 kubelet[3658]: I0430 00:44:39.607958 3658 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 00:44:39.608936 kubelet[3658]: I0430 00:44:39.607993 3658 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:44:39.608936 kubelet[3658]: I0430 00:44:39.608229 3658 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 30 00:44:39.608936 kubelet[3658]: I0430 
00:44:39.608249 3658 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 30 00:44:39.608936 kubelet[3658]: I0430 00:44:39.608286 3658 policy_none.go:49] "None policy: Start" Apr 30 00:44:39.611514 kubelet[3658]: I0430 00:44:39.611475 3658 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:44:39.611640 kubelet[3658]: I0430 00:44:39.611528 3658 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:44:39.611932 kubelet[3658]: I0430 00:44:39.611903 3658 state_mem.go:75] "Updated machine memory state" Apr 30 00:44:39.615480 kubelet[3658]: I0430 00:44:39.615393 3658 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:44:39.615768 kubelet[3658]: I0430 00:44:39.615704 3658 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:44:39.620029 kubelet[3658]: I0430 00:44:39.619484 3658 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:44:39.677352 kubelet[3658]: I0430 00:44:39.677288 3658 topology_manager.go:215] "Topology Admit Handler" podUID="819222ca88d6ba189f7bd09b49f8a38a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-157" Apr 30 00:44:39.677509 kubelet[3658]: I0430 00:44:39.677471 3658 topology_manager.go:215] "Topology Admit Handler" podUID="79291519c3a4d69593ce1dfec189fb6f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-157" Apr 30 00:44:39.677567 kubelet[3658]: I0430 00:44:39.677544 3658 topology_manager.go:215] "Topology Admit Handler" podUID="8b1529970624d1702729141278332294" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:39.689194 kubelet[3658]: E0430 00:44:39.688633 3658 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-27-157\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:39.852263 
kubelet[3658]: I0430 00:44:39.851615 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b1529970624d1702729141278332294-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-157\" (UID: \"8b1529970624d1702729141278332294\") " pod="kube-system/kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:39.852263 kubelet[3658]: I0430 00:44:39.851698 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b1529970624d1702729141278332294-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-157\" (UID: \"8b1529970624d1702729141278332294\") " pod="kube-system/kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:39.852263 kubelet[3658]: I0430 00:44:39.851761 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79291519c3a4d69593ce1dfec189fb6f-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-157\" (UID: \"79291519c3a4d69593ce1dfec189fb6f\") " pod="kube-system/kube-apiserver-ip-172-31-27-157" Apr 30 00:44:39.852263 kubelet[3658]: I0430 00:44:39.851808 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79291519c3a4d69593ce1dfec189fb6f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-157\" (UID: \"79291519c3a4d69593ce1dfec189fb6f\") " pod="kube-system/kube-apiserver-ip-172-31-27-157" Apr 30 00:44:39.852263 kubelet[3658]: I0430 00:44:39.851848 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b1529970624d1702729141278332294-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-157\" (UID: 
\"8b1529970624d1702729141278332294\") " pod="kube-system/kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:39.852642 kubelet[3658]: I0430 00:44:39.851886 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b1529970624d1702729141278332294-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-157\" (UID: \"8b1529970624d1702729141278332294\") " pod="kube-system/kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:39.852642 kubelet[3658]: I0430 00:44:39.851921 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b1529970624d1702729141278332294-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-157\" (UID: \"8b1529970624d1702729141278332294\") " pod="kube-system/kube-controller-manager-ip-172-31-27-157" Apr 30 00:44:39.852642 kubelet[3658]: I0430 00:44:39.851960 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/819222ca88d6ba189f7bd09b49f8a38a-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-157\" (UID: \"819222ca88d6ba189f7bd09b49f8a38a\") " pod="kube-system/kube-scheduler-ip-172-31-27-157" Apr 30 00:44:39.852642 kubelet[3658]: I0430 00:44:39.851997 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79291519c3a4d69593ce1dfec189fb6f-ca-certs\") pod \"kube-apiserver-ip-172-31-27-157\" (UID: \"79291519c3a4d69593ce1dfec189fb6f\") " pod="kube-system/kube-apiserver-ip-172-31-27-157" Apr 30 00:44:40.181971 sudo[3671]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:40.294446 kubelet[3658]: I0430 00:44:40.294366 3658 apiserver.go:52] "Watching apiserver" Apr 30 00:44:40.356898 kubelet[3658]: I0430 00:44:40.356745 3658 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:44:40.497460 kubelet[3658]: I0430 00:44:40.496688 3658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-157" podStartSLOduration=1.496639597 podStartE2EDuration="1.496639597s" podCreationTimestamp="2025-04-30 00:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:40.483280837 +0000 UTC m=+1.352921180" watchObservedRunningTime="2025-04-30 00:44:40.496639597 +0000 UTC m=+1.366279940" Apr 30 00:44:40.512694 kubelet[3658]: I0430 00:44:40.511837 3658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-157" podStartSLOduration=2.511811857 podStartE2EDuration="2.511811857s" podCreationTimestamp="2025-04-30 00:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:40.497450365 +0000 UTC m=+1.367090672" watchObservedRunningTime="2025-04-30 00:44:40.511811857 +0000 UTC m=+1.381452188" Apr 30 00:44:40.531687 kubelet[3658]: I0430 00:44:40.531593 3658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-157" podStartSLOduration=1.531570229 podStartE2EDuration="1.531570229s" podCreationTimestamp="2025-04-30 00:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:40.512858545 +0000 UTC m=+1.382498876" watchObservedRunningTime="2025-04-30 00:44:40.531570229 +0000 UTC m=+1.401210548" Apr 30 00:44:43.564014 sudo[2493]: pam_unix(sudo:session): session closed for user root Apr 30 00:44:43.601953 sshd[2489]: pam_unix(sshd:session): session closed for user core Apr 30 00:44:43.609153 
systemd[1]: sshd@6-172.31.27.157:22-147.75.109.163:41746.service: Deactivated successfully. Apr 30 00:44:43.618334 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 00:44:43.620131 systemd-logind[2113]: Session 7 logged out. Waiting for processes to exit. Apr 30 00:44:43.622510 systemd-logind[2113]: Removed session 7. Apr 30 00:44:52.756696 kubelet[3658]: I0430 00:44:52.756283 3658 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 30 00:44:52.760842 containerd[2141]: time="2025-04-30T00:44:52.759563750Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 00:44:52.761470 kubelet[3658]: I0430 00:44:52.760072 3658 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 30 00:44:53.486064 kubelet[3658]: I0430 00:44:53.485553 3658 topology_manager.go:215] "Topology Admit Handler" podUID="18fbfb19-53e9-4f18-b10f-5af24727d2e3" podNamespace="kube-system" podName="kube-proxy-xt6bs" Apr 30 00:44:53.515713 kubelet[3658]: I0430 00:44:53.513862 3658 topology_manager.go:215] "Topology Admit Handler" podUID="d9f15b7b-10de-45df-ac12-c941ea2d59ec" podNamespace="kube-system" podName="cilium-v5db4" Apr 30 00:44:53.541624 kubelet[3658]: I0430 00:44:53.541531 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-bpf-maps\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.541829 kubelet[3658]: I0430 00:44:53.541626 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-hostproc\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 
00:44:53.543522 kubelet[3658]: I0430 00:44:53.543485 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-xtables-lock\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.543825 kubelet[3658]: I0430 00:44:53.543793 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9f15b7b-10de-45df-ac12-c941ea2d59ec-clustermesh-secrets\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.544031 kubelet[3658]: I0430 00:44:53.544006 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-config-path\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.544216 kubelet[3658]: I0430 00:44:53.544192 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18fbfb19-53e9-4f18-b10f-5af24727d2e3-lib-modules\") pod \"kube-proxy-xt6bs\" (UID: \"18fbfb19-53e9-4f18-b10f-5af24727d2e3\") " pod="kube-system/kube-proxy-xt6bs" Apr 30 00:44:53.544708 kubelet[3658]: I0430 00:44:53.544347 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-lib-modules\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.544708 kubelet[3658]: I0430 00:44:53.544402 3658 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-host-proc-sys-kernel\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.544708 kubelet[3658]: I0430 00:44:53.544473 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/18fbfb19-53e9-4f18-b10f-5af24727d2e3-kube-proxy\") pod \"kube-proxy-xt6bs\" (UID: \"18fbfb19-53e9-4f18-b10f-5af24727d2e3\") " pod="kube-system/kube-proxy-xt6bs" Apr 30 00:44:53.544708 kubelet[3658]: I0430 00:44:53.544508 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-etc-cni-netd\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.544708 kubelet[3658]: I0430 00:44:53.544544 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18fbfb19-53e9-4f18-b10f-5af24727d2e3-xtables-lock\") pod \"kube-proxy-xt6bs\" (UID: \"18fbfb19-53e9-4f18-b10f-5af24727d2e3\") " pod="kube-system/kube-proxy-xt6bs" Apr 30 00:44:53.544708 kubelet[3658]: I0430 00:44:53.544578 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-cgroup\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.545132 kubelet[3658]: I0430 00:44:53.544615 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7gjq\" (UniqueName: 
\"kubernetes.io/projected/d9f15b7b-10de-45df-ac12-c941ea2d59ec-kube-api-access-k7gjq\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.545132 kubelet[3658]: I0430 00:44:53.544649 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9f15b7b-10de-45df-ac12-c941ea2d59ec-hubble-tls\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.545422 kubelet[3658]: I0430 00:44:53.545311 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-run\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.545422 kubelet[3658]: I0430 00:44:53.545396 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cni-path\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.545543 kubelet[3658]: I0430 00:44:53.545469 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-host-proc-sys-net\") pod \"cilium-v5db4\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") " pod="kube-system/cilium-v5db4" Apr 30 00:44:53.545612 kubelet[3658]: I0430 00:44:53.545540 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wpfb\" (UniqueName: \"kubernetes.io/projected/18fbfb19-53e9-4f18-b10f-5af24727d2e3-kube-api-access-9wpfb\") pod \"kube-proxy-xt6bs\" (UID: 
\"18fbfb19-53e9-4f18-b10f-5af24727d2e3\") " pod="kube-system/kube-proxy-xt6bs" Apr 30 00:44:53.812766 containerd[2141]: time="2025-04-30T00:44:53.810899355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xt6bs,Uid:18fbfb19-53e9-4f18-b10f-5af24727d2e3,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:53.843744 containerd[2141]: time="2025-04-30T00:44:53.843418467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5db4,Uid:d9f15b7b-10de-45df-ac12-c941ea2d59ec,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:53.967257 containerd[2141]: time="2025-04-30T00:44:53.961589847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:53.967257 containerd[2141]: time="2025-04-30T00:44:53.966157300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:53.967257 containerd[2141]: time="2025-04-30T00:44:53.966194656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:53.967257 containerd[2141]: time="2025-04-30T00:44:53.966402460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:53.974498 kubelet[3658]: I0430 00:44:53.972562 3658 topology_manager.go:215] "Topology Admit Handler" podUID="8ecdb89e-d0b5-4e19-ad77-b200634425d7" podNamespace="kube-system" podName="cilium-operator-599987898-25vp7" Apr 30 00:44:54.050457 kubelet[3658]: I0430 00:44:54.049466 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ecdb89e-d0b5-4e19-ad77-b200634425d7-cilium-config-path\") pod \"cilium-operator-599987898-25vp7\" (UID: \"8ecdb89e-d0b5-4e19-ad77-b200634425d7\") " pod="kube-system/cilium-operator-599987898-25vp7" Apr 30 00:44:54.050457 kubelet[3658]: I0430 00:44:54.049532 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ll4l\" (UniqueName: \"kubernetes.io/projected/8ecdb89e-d0b5-4e19-ad77-b200634425d7-kube-api-access-8ll4l\") pod \"cilium-operator-599987898-25vp7\" (UID: \"8ecdb89e-d0b5-4e19-ad77-b200634425d7\") " pod="kube-system/cilium-operator-599987898-25vp7" Apr 30 00:44:54.066243 containerd[2141]: time="2025-04-30T00:44:54.065609244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:54.071958 containerd[2141]: time="2025-04-30T00:44:54.071855796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:54.072110 containerd[2141]: time="2025-04-30T00:44:54.071943240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:54.073722 containerd[2141]: time="2025-04-30T00:44:54.073315272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:54.125532 containerd[2141]: time="2025-04-30T00:44:54.125456316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xt6bs,Uid:18fbfb19-53e9-4f18-b10f-5af24727d2e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"63aa8c80b63310928fee8b00ac054d51a61e518c6111167059d692be2df1e7d3\"" Apr 30 00:44:54.136523 containerd[2141]: time="2025-04-30T00:44:54.136280256Z" level=info msg="CreateContainer within sandbox \"63aa8c80b63310928fee8b00ac054d51a61e518c6111167059d692be2df1e7d3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 00:44:54.175237 containerd[2141]: time="2025-04-30T00:44:54.174127261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5db4,Uid:d9f15b7b-10de-45df-ac12-c941ea2d59ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\"" Apr 30 00:44:54.179633 containerd[2141]: time="2025-04-30T00:44:54.178743565Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 30 00:44:54.187518 containerd[2141]: time="2025-04-30T00:44:54.187455613Z" level=info msg="CreateContainer within sandbox \"63aa8c80b63310928fee8b00ac054d51a61e518c6111167059d692be2df1e7d3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b29c6f94cc10643706eb9eee1b539bc730af265d1419adec40fb2fd655d9b8e\"" Apr 30 00:44:54.190656 containerd[2141]: time="2025-04-30T00:44:54.190596229Z" level=info msg="StartContainer for \"0b29c6f94cc10643706eb9eee1b539bc730af265d1419adec40fb2fd655d9b8e\"" Apr 30 00:44:54.291971 containerd[2141]: time="2025-04-30T00:44:54.291866437Z" level=info msg="StartContainer for \"0b29c6f94cc10643706eb9eee1b539bc730af265d1419adec40fb2fd655d9b8e\" returns successfully" Apr 30 00:44:54.309097 containerd[2141]: time="2025-04-30T00:44:54.309007081Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-599987898-25vp7,Uid:8ecdb89e-d0b5-4e19-ad77-b200634425d7,Namespace:kube-system,Attempt:0,}" Apr 30 00:44:54.375037 containerd[2141]: time="2025-04-30T00:44:54.372812318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:44:54.375037 containerd[2141]: time="2025-04-30T00:44:54.373003970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:44:54.375037 containerd[2141]: time="2025-04-30T00:44:54.373065074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:54.375037 containerd[2141]: time="2025-04-30T00:44:54.373791374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:44:54.496508 containerd[2141]: time="2025-04-30T00:44:54.496413410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-25vp7,Uid:8ecdb89e-d0b5-4e19-ad77-b200634425d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\"" Apr 30 00:44:59.398778 kubelet[3658]: I0430 00:44:59.398689 3658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xt6bs" podStartSLOduration=6.398646438 podStartE2EDuration="6.398646438s" podCreationTimestamp="2025-04-30 00:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:44:54.51553889 +0000 UTC m=+15.385179221" watchObservedRunningTime="2025-04-30 00:44:59.398646438 +0000 UTC m=+20.268286769" Apr 30 00:45:04.019046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268717203.mount: Deactivated successfully. 
Apr 30 00:45:06.509374 containerd[2141]: time="2025-04-30T00:45:06.509289182Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:06.511434 containerd[2141]: time="2025-04-30T00:45:06.511327286Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 30 00:45:06.512887 containerd[2141]: time="2025-04-30T00:45:06.512789810Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:06.517349 containerd[2141]: time="2025-04-30T00:45:06.517261190Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.338443741s" Apr 30 00:45:06.517349 containerd[2141]: time="2025-04-30T00:45:06.517342730Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 30 00:45:06.521759 containerd[2141]: time="2025-04-30T00:45:06.521697902Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 00:45:06.524254 containerd[2141]: time="2025-04-30T00:45:06.524199182Z" level=info msg="CreateContainer within sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:45:06.543087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266235225.mount: Deactivated successfully. Apr 30 00:45:06.546796 containerd[2141]: time="2025-04-30T00:45:06.546725354Z" level=info msg="CreateContainer within sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2\"" Apr 30 00:45:06.549302 containerd[2141]: time="2025-04-30T00:45:06.549127934Z" level=info msg="StartContainer for \"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2\"" Apr 30 00:45:06.607531 systemd[1]: run-containerd-runc-k8s.io-ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2-runc.rrq4PW.mount: Deactivated successfully. Apr 30 00:45:06.660097 containerd[2141]: time="2025-04-30T00:45:06.659834931Z" level=info msg="StartContainer for \"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2\" returns successfully" Apr 30 00:45:07.536591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2-rootfs.mount: Deactivated successfully. 
Apr 30 00:45:07.881957 containerd[2141]: time="2025-04-30T00:45:07.881768141Z" level=info msg="shim disconnected" id=ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2 namespace=k8s.io Apr 30 00:45:07.881957 containerd[2141]: time="2025-04-30T00:45:07.881846381Z" level=warning msg="cleaning up after shim disconnected" id=ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2 namespace=k8s.io Apr 30 00:45:07.881957 containerd[2141]: time="2025-04-30T00:45:07.881868161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:45:08.566746 containerd[2141]: time="2025-04-30T00:45:08.566515912Z" level=info msg="CreateContainer within sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:45:08.601763 containerd[2141]: time="2025-04-30T00:45:08.598176424Z" level=info msg="CreateContainer within sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2\"" Apr 30 00:45:08.605001 containerd[2141]: time="2025-04-30T00:45:08.604940368Z" level=info msg="StartContainer for \"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2\"" Apr 30 00:45:08.714995 containerd[2141]: time="2025-04-30T00:45:08.714592637Z" level=info msg="StartContainer for \"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2\" returns successfully" Apr 30 00:45:08.734811 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:45:08.735429 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:45:08.735558 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:45:08.750259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Apr 30 00:45:08.786945 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:45:08.793846 containerd[2141]: time="2025-04-30T00:45:08.793556909Z" level=info msg="shim disconnected" id=6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2 namespace=k8s.io Apr 30 00:45:08.793846 containerd[2141]: time="2025-04-30T00:45:08.793747541Z" level=warning msg="cleaning up after shim disconnected" id=6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2 namespace=k8s.io Apr 30 00:45:08.793846 containerd[2141]: time="2025-04-30T00:45:08.793791557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:45:09.574713 containerd[2141]: time="2025-04-30T00:45:09.574336421Z" level=info msg="CreateContainer within sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:45:09.591652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2-rootfs.mount: Deactivated successfully. 
Apr 30 00:45:09.614622 containerd[2141]: time="2025-04-30T00:45:09.614550029Z" level=info msg="CreateContainer within sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\"" Apr 30 00:45:09.617045 containerd[2141]: time="2025-04-30T00:45:09.615713165Z" level=info msg="StartContainer for \"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\"" Apr 30 00:45:09.735277 containerd[2141]: time="2025-04-30T00:45:09.735221706Z" level=info msg="StartContainer for \"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\" returns successfully" Apr 30 00:45:09.791283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93-rootfs.mount: Deactivated successfully. Apr 30 00:45:09.793702 kubelet[3658]: E0430 00:45:09.793543 3658 cadvisor_stats_provider.go:500] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/burstable/podd9f15b7b-10de-45df-ac12-c941ea2d59ec/dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\": RecentStats: unable to find data in memory cache]" Apr 30 00:45:09.802273 containerd[2141]: time="2025-04-30T00:45:09.802163202Z" level=info msg="shim disconnected" id=dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93 namespace=k8s.io Apr 30 00:45:09.802840 containerd[2141]: time="2025-04-30T00:45:09.802266318Z" level=warning msg="cleaning up after shim disconnected" id=dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93 namespace=k8s.io Apr 30 00:45:09.802840 containerd[2141]: time="2025-04-30T00:45:09.802311222Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:45:10.584518 containerd[2141]: time="2025-04-30T00:45:10.584461638Z" level=info msg="CreateContainer within sandbox 
\"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:45:10.644588 containerd[2141]: time="2025-04-30T00:45:10.641292966Z" level=info msg="CreateContainer within sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c\"" Apr 30 00:45:10.652345 containerd[2141]: time="2025-04-30T00:45:10.651885966Z" level=info msg="StartContainer for \"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c\"" Apr 30 00:45:10.716320 systemd[1]: run-containerd-runc-k8s.io-73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c-runc.LXWksE.mount: Deactivated successfully. Apr 30 00:45:10.765304 containerd[2141]: time="2025-04-30T00:45:10.765140107Z" level=info msg="StartContainer for \"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c\" returns successfully" Apr 30 00:45:10.802492 containerd[2141]: time="2025-04-30T00:45:10.802242247Z" level=info msg="shim disconnected" id=73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c namespace=k8s.io Apr 30 00:45:10.802492 containerd[2141]: time="2025-04-30T00:45:10.802317247Z" level=warning msg="cleaning up after shim disconnected" id=73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c namespace=k8s.io Apr 30 00:45:10.802492 containerd[2141]: time="2025-04-30T00:45:10.802339243Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:45:11.594588 containerd[2141]: time="2025-04-30T00:45:11.594509587Z" level=info msg="CreateContainer within sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:45:11.615316 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c-rootfs.mount: Deactivated successfully. Apr 30 00:45:11.625550 containerd[2141]: time="2025-04-30T00:45:11.625413235Z" level=info msg="CreateContainer within sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\"" Apr 30 00:45:11.628870 containerd[2141]: time="2025-04-30T00:45:11.627829207Z" level=info msg="StartContainer for \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\"" Apr 30 00:45:11.740482 containerd[2141]: time="2025-04-30T00:45:11.740374820Z" level=info msg="StartContainer for \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\" returns successfully" Apr 30 00:45:11.970706 kubelet[3658]: I0430 00:45:11.968866 3658 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:45:12.052889 kubelet[3658]: I0430 00:45:12.052762 3658 topology_manager.go:215] "Topology Admit Handler" podUID="109625ca-43b8-45e4-a990-d1b8b848de37" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zdsgj" Apr 30 00:45:12.055980 kubelet[3658]: I0430 00:45:12.055917 3658 topology_manager.go:215] "Topology Admit Handler" podUID="c0ea50ef-a273-482c-a51b-6f5dd2292798" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7qd4m" Apr 30 00:45:12.088241 kubelet[3658]: I0430 00:45:12.088179 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/109625ca-43b8-45e4-a990-d1b8b848de37-config-volume\") pod \"coredns-7db6d8ff4d-zdsgj\" (UID: \"109625ca-43b8-45e4-a990-d1b8b848de37\") " pod="kube-system/coredns-7db6d8ff4d-zdsgj" Apr 30 00:45:12.088407 kubelet[3658]: I0430 00:45:12.088249 3658 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0ea50ef-a273-482c-a51b-6f5dd2292798-config-volume\") pod \"coredns-7db6d8ff4d-7qd4m\" (UID: \"c0ea50ef-a273-482c-a51b-6f5dd2292798\") " pod="kube-system/coredns-7db6d8ff4d-7qd4m" Apr 30 00:45:12.088407 kubelet[3658]: I0430 00:45:12.088303 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgl5\" (UniqueName: \"kubernetes.io/projected/c0ea50ef-a273-482c-a51b-6f5dd2292798-kube-api-access-rkgl5\") pod \"coredns-7db6d8ff4d-7qd4m\" (UID: \"c0ea50ef-a273-482c-a51b-6f5dd2292798\") " pod="kube-system/coredns-7db6d8ff4d-7qd4m" Apr 30 00:45:12.088407 kubelet[3658]: I0430 00:45:12.088343 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h29l8\" (UniqueName: \"kubernetes.io/projected/109625ca-43b8-45e4-a990-d1b8b848de37-kube-api-access-h29l8\") pod \"coredns-7db6d8ff4d-zdsgj\" (UID: \"109625ca-43b8-45e4-a990-d1b8b848de37\") " pod="kube-system/coredns-7db6d8ff4d-zdsgj" Apr 30 00:45:12.414188 containerd[2141]: time="2025-04-30T00:45:12.411481807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zdsgj,Uid:109625ca-43b8-45e4-a990-d1b8b848de37,Namespace:kube-system,Attempt:0,}" Apr 30 00:45:12.414360 containerd[2141]: time="2025-04-30T00:45:12.412991311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7qd4m,Uid:c0ea50ef-a273-482c-a51b-6f5dd2292798,Namespace:kube-system,Attempt:0,}" Apr 30 00:45:13.351348 containerd[2141]: time="2025-04-30T00:45:13.351208964Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:13.353460 containerd[2141]: time="2025-04-30T00:45:13.353375108Z" level=info msg="stop 
pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 30 00:45:13.355780 containerd[2141]: time="2025-04-30T00:45:13.355701860Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:45:13.360173 containerd[2141]: time="2025-04-30T00:45:13.360114080Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.838347754s" Apr 30 00:45:13.360505 containerd[2141]: time="2025-04-30T00:45:13.360177428Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 30 00:45:13.365131 containerd[2141]: time="2025-04-30T00:45:13.365061752Z" level=info msg="CreateContainer within sandbox \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 00:45:13.389046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount948702884.mount: Deactivated successfully. 
Apr 30 00:45:13.396137 containerd[2141]: time="2025-04-30T00:45:13.396072056Z" level=info msg="CreateContainer within sandbox \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\"" Apr 30 00:45:13.397232 containerd[2141]: time="2025-04-30T00:45:13.397165868Z" level=info msg="StartContainer for \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\"" Apr 30 00:45:13.490955 containerd[2141]: time="2025-04-30T00:45:13.490836812Z" level=info msg="StartContainer for \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\" returns successfully" Apr 30 00:45:13.689952 kubelet[3658]: I0430 00:45:13.686021 3658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v5db4" podStartSLOduration=8.343496408 podStartE2EDuration="20.685997709s" podCreationTimestamp="2025-04-30 00:44:53 +0000 UTC" firstStartedPulling="2025-04-30 00:44:54.177372745 +0000 UTC m=+15.047013076" lastFinishedPulling="2025-04-30 00:45:06.519874046 +0000 UTC m=+27.389514377" observedRunningTime="2025-04-30 00:45:12.67964288 +0000 UTC m=+33.549283211" watchObservedRunningTime="2025-04-30 00:45:13.685997709 +0000 UTC m=+34.555638064" Apr 30 00:45:16.479933 systemd-networkd[1686]: cilium_host: Link UP Apr 30 00:45:16.480376 systemd-networkd[1686]: cilium_net: Link UP Apr 30 00:45:16.481005 systemd-networkd[1686]: cilium_net: Gained carrier Apr 30 00:45:16.481352 systemd-networkd[1686]: cilium_host: Gained carrier Apr 30 00:45:16.481588 systemd-networkd[1686]: cilium_net: Gained IPv6LL Apr 30 00:45:16.482052 systemd-networkd[1686]: cilium_host: Gained IPv6LL Apr 30 00:45:16.490566 (udev-worker)[4475]: Network interface NamePolicy= disabled on kernel command line. Apr 30 00:45:16.490745 (udev-worker)[4476]: Network interface NamePolicy= disabled on kernel command line. 
Apr 30 00:45:16.663962 systemd-networkd[1686]: cilium_vxlan: Link UP
Apr 30 00:45:16.663981 systemd-networkd[1686]: cilium_vxlan: Gained carrier
Apr 30 00:45:17.144803 kernel: NET: Registered PF_ALG protocol family
Apr 30 00:45:17.815941 systemd-networkd[1686]: cilium_vxlan: Gained IPv6LL
Apr 30 00:45:18.582373 (udev-worker)[4487]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 00:45:18.592617 systemd-networkd[1686]: lxc_health: Link UP
Apr 30 00:45:18.603285 systemd[1]: Started sshd@7-172.31.27.157:22-147.75.109.163:53106.service - OpenSSH per-connection server daemon (147.75.109.163:53106).
Apr 30 00:45:18.613343 systemd-networkd[1686]: lxc_health: Gained carrier
Apr 30 00:45:18.957683 sshd[4803]: Accepted publickey for core from 147.75.109.163 port 53106 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:18.965185 sshd[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:18.988047 systemd-logind[2113]: New session 8 of user core.
Apr 30 00:45:18.995265 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 30 00:45:19.228197 systemd-networkd[1686]: lxc8ec42008d6f9: Link UP
Apr 30 00:45:19.252319 kernel: eth0: renamed from tmpebecf
Apr 30 00:45:19.267795 systemd-networkd[1686]: lxc8ec42008d6f9: Gained carrier
Apr 30 00:45:19.272144 systemd-networkd[1686]: lxc52993fef25f0: Link UP
Apr 30 00:45:19.315838 kernel: eth0: renamed from tmp19303
Apr 30 00:45:19.322624 (udev-worker)[4490]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 00:45:19.330622 systemd-networkd[1686]: lxc52993fef25f0: Gained carrier
Apr 30 00:45:19.542990 sshd[4803]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:19.555918 systemd[1]: sshd@7-172.31.27.157:22-147.75.109.163:53106.service: Deactivated successfully.
Apr 30 00:45:19.558386 systemd-logind[2113]: Session 8 logged out. Waiting for processes to exit.
Apr 30 00:45:19.570256 systemd[1]: session-8.scope: Deactivated successfully.
Apr 30 00:45:19.579226 systemd-logind[2113]: Removed session 8.
Apr 30 00:45:19.907786 kubelet[3658]: I0430 00:45:19.905923 3658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-25vp7" podStartSLOduration=8.044314134 podStartE2EDuration="26.905898892s" podCreationTimestamp="2025-04-30 00:44:53 +0000 UTC" firstStartedPulling="2025-04-30 00:44:54.499500662 +0000 UTC m=+15.369140969" lastFinishedPulling="2025-04-30 00:45:13.36108542 +0000 UTC m=+34.230725727" observedRunningTime="2025-04-30 00:45:13.686889777 +0000 UTC m=+34.556530108" watchObservedRunningTime="2025-04-30 00:45:19.905898892 +0000 UTC m=+40.775539235"
Apr 30 00:45:20.439976 systemd-networkd[1686]: lxc8ec42008d6f9: Gained IPv6LL
Apr 30 00:45:20.631995 systemd-networkd[1686]: lxc_health: Gained IPv6LL
Apr 30 00:45:20.695867 systemd-networkd[1686]: lxc52993fef25f0: Gained IPv6LL
Apr 30 00:45:23.176759 ntpd[2089]: Listen normally on 6 cilium_host 192.168.0.55:123
Apr 30 00:45:23.176902 ntpd[2089]: Listen normally on 7 cilium_net [fe80::54fd:65ff:fee9:cef1%4]:123
Apr 30 00:45:23.177423 ntpd[2089]: 30 Apr 00:45:23 ntpd[2089]: Listen normally on 6 cilium_host 192.168.0.55:123
Apr 30 00:45:23.177423 ntpd[2089]: 30 Apr 00:45:23 ntpd[2089]: Listen normally on 7 cilium_net [fe80::54fd:65ff:fee9:cef1%4]:123
Apr 30 00:45:23.177423 ntpd[2089]: 30 Apr 00:45:23 ntpd[2089]: Listen normally on 8 cilium_host [fe80::381d:59ff:fe32:d812%5]:123
Apr 30 00:45:23.177423 ntpd[2089]: 30 Apr 00:45:23 ntpd[2089]: Listen normally on 9 cilium_vxlan [fe80::98df:c0ff:fe01:f51c%6]:123
Apr 30 00:45:23.177423 ntpd[2089]: 30 Apr 00:45:23 ntpd[2089]: Listen normally on 10 lxc_health [fe80::a863:4dff:fef8:dfe2%8]:123
Apr 30 00:45:23.177423 ntpd[2089]: 30 Apr 00:45:23 ntpd[2089]: Listen normally on 11 lxc8ec42008d6f9 [fe80::3cbc:68ff:fea7:7cc0%10]:123
Apr 30 00:45:23.177423 ntpd[2089]: 30 Apr 00:45:23 ntpd[2089]: Listen normally on 12 lxc52993fef25f0 [fe80::f0d0:a7ff:fe76:a624%12]:123
Apr 30 00:45:23.176989 ntpd[2089]: Listen normally on 8 cilium_host [fe80::381d:59ff:fe32:d812%5]:123
Apr 30 00:45:23.177058 ntpd[2089]: Listen normally on 9 cilium_vxlan [fe80::98df:c0ff:fe01:f51c%6]:123
Apr 30 00:45:23.177125 ntpd[2089]: Listen normally on 10 lxc_health [fe80::a863:4dff:fef8:dfe2%8]:123
Apr 30 00:45:23.177193 ntpd[2089]: Listen normally on 11 lxc8ec42008d6f9 [fe80::3cbc:68ff:fea7:7cc0%10]:123
Apr 30 00:45:23.177260 ntpd[2089]: Listen normally on 12 lxc52993fef25f0 [fe80::f0d0:a7ff:fe76:a624%12]:123
Apr 30 00:45:24.586254 systemd[1]: Started sshd@8-172.31.27.157:22-147.75.109.163:53114.service - OpenSSH per-connection server daemon (147.75.109.163:53114).
Apr 30 00:45:24.858631 sshd[4853]: Accepted publickey for core from 147.75.109.163 port 53114 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:24.861520 sshd[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:24.872296 systemd-logind[2113]: New session 9 of user core.
Apr 30 00:45:24.883307 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 30 00:45:25.215079 sshd[4853]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:25.228375 systemd[1]: sshd@8-172.31.27.157:22-147.75.109.163:53114.service: Deactivated successfully.
Apr 30 00:45:25.241227 systemd-logind[2113]: Session 9 logged out. Waiting for processes to exit.
Apr 30 00:45:25.242718 systemd[1]: session-9.scope: Deactivated successfully.
Apr 30 00:45:25.249918 systemd-logind[2113]: Removed session 9.
Apr 30 00:45:27.711261 containerd[2141]: time="2025-04-30T00:45:27.711056831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:45:27.714608 containerd[2141]: time="2025-04-30T00:45:27.711170627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:45:27.714608 containerd[2141]: time="2025-04-30T00:45:27.712823375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:45:27.716723 containerd[2141]: time="2025-04-30T00:45:27.715591811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:45:27.855703 containerd[2141]: time="2025-04-30T00:45:27.847785540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:45:27.855703 containerd[2141]: time="2025-04-30T00:45:27.847887600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:45:27.855703 containerd[2141]: time="2025-04-30T00:45:27.847938852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:45:27.855703 containerd[2141]: time="2025-04-30T00:45:27.848107932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:45:27.927371 containerd[2141]: time="2025-04-30T00:45:27.927291420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zdsgj,Uid:109625ca-43b8-45e4-a990-d1b8b848de37,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebecf81cdcaaa3ae73a013f72881455862a76b1b3b43c2c5856baa03ced4d1f2\""
Apr 30 00:45:27.948193 containerd[2141]: time="2025-04-30T00:45:27.948003504Z" level=info msg="CreateContainer within sandbox \"ebecf81cdcaaa3ae73a013f72881455862a76b1b3b43c2c5856baa03ced4d1f2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 00:45:28.003141 containerd[2141]: time="2025-04-30T00:45:28.002900973Z" level=info msg="CreateContainer within sandbox \"ebecf81cdcaaa3ae73a013f72881455862a76b1b3b43c2c5856baa03ced4d1f2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12841a643df7b31e9f7ce11afa3cc6e18bfea6157e5c8e19f56dbf0dbc36093c\""
Apr 30 00:45:28.013800 containerd[2141]: time="2025-04-30T00:45:28.012829569Z" level=info msg="StartContainer for \"12841a643df7b31e9f7ce11afa3cc6e18bfea6157e5c8e19f56dbf0dbc36093c\""
Apr 30 00:45:28.053203 containerd[2141]: time="2025-04-30T00:45:28.053150769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7qd4m,Uid:c0ea50ef-a273-482c-a51b-6f5dd2292798,Namespace:kube-system,Attempt:0,} returns sandbox id \"19303e92fb1fcc6e2d61f1c8215002a5bf439d6166f0cab87d930d2590576772\""
Apr 30 00:45:28.061051 containerd[2141]: time="2025-04-30T00:45:28.060818757Z" level=info msg="CreateContainer within sandbox \"19303e92fb1fcc6e2d61f1c8215002a5bf439d6166f0cab87d930d2590576772\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 30 00:45:28.111996 containerd[2141]: time="2025-04-30T00:45:28.111024357Z" level=info msg="CreateContainer within sandbox \"19303e92fb1fcc6e2d61f1c8215002a5bf439d6166f0cab87d930d2590576772\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df57f64801783ecfc67892f971d358f692413bfb12956e694fd83e025e3e2c69\""
Apr 30 00:45:28.112734 containerd[2141]: time="2025-04-30T00:45:28.112691769Z" level=info msg="StartContainer for \"df57f64801783ecfc67892f971d358f692413bfb12956e694fd83e025e3e2c69\""
Apr 30 00:45:28.189108 containerd[2141]: time="2025-04-30T00:45:28.189001533Z" level=info msg="StartContainer for \"12841a643df7b31e9f7ce11afa3cc6e18bfea6157e5c8e19f56dbf0dbc36093c\" returns successfully"
Apr 30 00:45:28.246182 containerd[2141]: time="2025-04-30T00:45:28.245891602Z" level=info msg="StartContainer for \"df57f64801783ecfc67892f971d358f692413bfb12956e694fd83e025e3e2c69\" returns successfully"
Apr 30 00:45:28.759824 kubelet[3658]: I0430 00:45:28.757617 3658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zdsgj" podStartSLOduration=35.7575936 podStartE2EDuration="35.7575936s" podCreationTimestamp="2025-04-30 00:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:28.725788752 +0000 UTC m=+49.595429167" watchObservedRunningTime="2025-04-30 00:45:28.7575936 +0000 UTC m=+49.627234099"
Apr 30 00:45:28.790751 kubelet[3658]: I0430 00:45:28.789148 3658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7qd4m" podStartSLOduration=35.789121188 podStartE2EDuration="35.789121188s" podCreationTimestamp="2025-04-30 00:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:45:28.785182104 +0000 UTC m=+49.654822447" watchObservedRunningTime="2025-04-30 00:45:28.789121188 +0000 UTC m=+49.658761519"
Apr 30 00:45:30.258272 systemd[1]: Started sshd@9-172.31.27.157:22-147.75.109.163:45238.service - OpenSSH per-connection server daemon (147.75.109.163:45238).
Apr 30 00:45:30.522192 sshd[5038]: Accepted publickey for core from 147.75.109.163 port 45238 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:30.524881 sshd[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:30.533236 systemd-logind[2113]: New session 10 of user core.
Apr 30 00:45:30.539485 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 30 00:45:30.832358 sshd[5038]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:30.841496 systemd[1]: sshd@9-172.31.27.157:22-147.75.109.163:45238.service: Deactivated successfully.
Apr 30 00:45:30.851040 systemd[1]: session-10.scope: Deactivated successfully.
Apr 30 00:45:30.852511 systemd-logind[2113]: Session 10 logged out. Waiting for processes to exit.
Apr 30 00:45:30.855117 systemd-logind[2113]: Removed session 10.
Apr 30 00:45:35.881248 systemd[1]: Started sshd@10-172.31.27.157:22-147.75.109.163:45250.service - OpenSSH per-connection server daemon (147.75.109.163:45250).
Apr 30 00:45:36.140336 sshd[5052]: Accepted publickey for core from 147.75.109.163 port 45250 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:36.143057 sshd[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:36.150959 systemd-logind[2113]: New session 11 of user core.
Apr 30 00:45:36.158193 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 30 00:45:36.447415 sshd[5052]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:36.453525 systemd[1]: sshd@10-172.31.27.157:22-147.75.109.163:45250.service: Deactivated successfully.
Apr 30 00:45:36.462494 systemd[1]: session-11.scope: Deactivated successfully.
Apr 30 00:45:36.464130 systemd-logind[2113]: Session 11 logged out. Waiting for processes to exit.
Apr 30 00:45:36.466523 systemd-logind[2113]: Removed session 11.
Apr 30 00:45:41.493163 systemd[1]: Started sshd@11-172.31.27.157:22-147.75.109.163:59450.service - OpenSSH per-connection server daemon (147.75.109.163:59450).
Apr 30 00:45:41.758466 sshd[5069]: Accepted publickey for core from 147.75.109.163 port 59450 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:41.761457 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:41.771024 systemd-logind[2113]: New session 12 of user core.
Apr 30 00:45:41.777161 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 00:45:42.073985 sshd[5069]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:42.083027 systemd[1]: sshd@11-172.31.27.157:22-147.75.109.163:59450.service: Deactivated successfully.
Apr 30 00:45:42.088652 systemd-logind[2113]: Session 12 logged out. Waiting for processes to exit.
Apr 30 00:45:42.090334 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 00:45:42.093108 systemd-logind[2113]: Removed session 12.
Apr 30 00:45:42.122280 systemd[1]: Started sshd@12-172.31.27.157:22-147.75.109.163:59452.service - OpenSSH per-connection server daemon (147.75.109.163:59452).
Apr 30 00:45:42.377379 sshd[5084]: Accepted publickey for core from 147.75.109.163 port 59452 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:42.380256 sshd[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:42.388181 systemd-logind[2113]: New session 13 of user core.
Apr 30 00:45:42.394890 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 00:45:42.760383 sshd[5084]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:42.774089 systemd[1]: sshd@12-172.31.27.157:22-147.75.109.163:59452.service: Deactivated successfully.
Apr 30 00:45:42.777350 systemd-logind[2113]: Session 13 logged out. Waiting for processes to exit.
Apr 30 00:45:42.794887 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 00:45:42.819783 systemd-logind[2113]: Removed session 13.
Apr 30 00:45:42.830159 systemd[1]: Started sshd@13-172.31.27.157:22-147.75.109.163:59456.service - OpenSSH per-connection server daemon (147.75.109.163:59456).
Apr 30 00:45:43.140422 sshd[5096]: Accepted publickey for core from 147.75.109.163 port 59456 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:43.142440 sshd[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:43.151183 systemd-logind[2113]: New session 14 of user core.
Apr 30 00:45:43.162335 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 00:45:43.453177 sshd[5096]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:43.461366 systemd[1]: sshd@13-172.31.27.157:22-147.75.109.163:59456.service: Deactivated successfully.
Apr 30 00:45:43.468418 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 00:45:43.468447 systemd-logind[2113]: Session 14 logged out. Waiting for processes to exit.
Apr 30 00:45:43.473282 systemd-logind[2113]: Removed session 14.
Apr 30 00:45:48.499194 systemd[1]: Started sshd@14-172.31.27.157:22-147.75.109.163:42464.service - OpenSSH per-connection server daemon (147.75.109.163:42464).
Apr 30 00:45:48.769705 sshd[5112]: Accepted publickey for core from 147.75.109.163 port 42464 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:48.772121 sshd[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:48.779954 systemd-logind[2113]: New session 15 of user core.
Apr 30 00:45:48.785382 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 00:45:49.086010 sshd[5112]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:49.093157 systemd-logind[2113]: Session 15 logged out. Waiting for processes to exit.
Apr 30 00:45:49.094383 systemd[1]: sshd@14-172.31.27.157:22-147.75.109.163:42464.service: Deactivated successfully.
Apr 30 00:45:49.099552 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 00:45:49.102398 systemd-logind[2113]: Removed session 15.
Apr 30 00:45:54.133145 systemd[1]: Started sshd@15-172.31.27.157:22-147.75.109.163:42472.service - OpenSSH per-connection server daemon (147.75.109.163:42472).
Apr 30 00:45:54.391042 sshd[5126]: Accepted publickey for core from 147.75.109.163 port 42472 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:45:54.393773 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:45:54.401347 systemd-logind[2113]: New session 16 of user core.
Apr 30 00:45:54.413138 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 00:45:54.700058 sshd[5126]: pam_unix(sshd:session): session closed for user core
Apr 30 00:45:54.706537 systemd[1]: sshd@15-172.31.27.157:22-147.75.109.163:42472.service: Deactivated successfully.
Apr 30 00:45:54.714473 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 00:45:54.716299 systemd-logind[2113]: Session 16 logged out. Waiting for processes to exit.
Apr 30 00:45:54.718623 systemd-logind[2113]: Removed session 16.
Apr 30 00:45:59.748400 systemd[1]: Started sshd@16-172.31.27.157:22-147.75.109.163:41756.service - OpenSSH per-connection server daemon (147.75.109.163:41756).
Apr 30 00:46:00.004573 sshd[5142]: Accepted publickey for core from 147.75.109.163 port 41756 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:00.007301 sshd[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:00.016290 systemd-logind[2113]: New session 17 of user core.
Apr 30 00:46:00.025400 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 00:46:00.317400 sshd[5142]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:00.324430 systemd[1]: sshd@16-172.31.27.157:22-147.75.109.163:41756.service: Deactivated successfully.
Apr 30 00:46:00.333396 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 00:46:00.335283 systemd-logind[2113]: Session 17 logged out. Waiting for processes to exit.
Apr 30 00:46:00.337821 systemd-logind[2113]: Removed session 17.
Apr 30 00:46:00.364577 systemd[1]: Started sshd@17-172.31.27.157:22-147.75.109.163:41758.service - OpenSSH per-connection server daemon (147.75.109.163:41758).
Apr 30 00:46:00.633987 sshd[5156]: Accepted publickey for core from 147.75.109.163 port 41758 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:00.636914 sshd[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:00.646866 systemd-logind[2113]: New session 18 of user core.
Apr 30 00:46:00.652248 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 00:46:01.013874 sshd[5156]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:01.021046 systemd[1]: sshd@17-172.31.27.157:22-147.75.109.163:41758.service: Deactivated successfully.
Apr 30 00:46:01.026906 systemd-logind[2113]: Session 18 logged out. Waiting for processes to exit.
Apr 30 00:46:01.027784 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 00:46:01.031550 systemd-logind[2113]: Removed session 18.
Apr 30 00:46:01.059568 systemd[1]: Started sshd@18-172.31.27.157:22-147.75.109.163:41766.service - OpenSSH per-connection server daemon (147.75.109.163:41766).
Apr 30 00:46:01.323743 sshd[5168]: Accepted publickey for core from 147.75.109.163 port 41766 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:01.325902 sshd[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:01.333700 systemd-logind[2113]: New session 19 of user core.
Apr 30 00:46:01.344334 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 00:46:03.930974 sshd[5168]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:03.942236 systemd[1]: sshd@18-172.31.27.157:22-147.75.109.163:41766.service: Deactivated successfully.
Apr 30 00:46:03.948966 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 00:46:03.951063 systemd-logind[2113]: Session 19 logged out. Waiting for processes to exit.
Apr 30 00:46:03.953241 systemd-logind[2113]: Removed session 19.
Apr 30 00:46:03.976581 systemd[1]: Started sshd@19-172.31.27.157:22-147.75.109.163:41768.service - OpenSSH per-connection server daemon (147.75.109.163:41768).
Apr 30 00:46:04.242924 sshd[5186]: Accepted publickey for core from 147.75.109.163 port 41768 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:04.246303 sshd[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:04.256563 systemd-logind[2113]: New session 20 of user core.
Apr 30 00:46:04.265367 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 00:46:04.788026 sshd[5186]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:04.796438 systemd-logind[2113]: Session 20 logged out. Waiting for processes to exit.
Apr 30 00:46:04.798626 systemd[1]: sshd@19-172.31.27.157:22-147.75.109.163:41768.service: Deactivated successfully.
Apr 30 00:46:04.806568 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 00:46:04.808641 systemd-logind[2113]: Removed session 20.
Apr 30 00:46:04.832175 systemd[1]: Started sshd@20-172.31.27.157:22-147.75.109.163:41782.service - OpenSSH per-connection server daemon (147.75.109.163:41782).
Apr 30 00:46:05.102134 sshd[5198]: Accepted publickey for core from 147.75.109.163 port 41782 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:05.105355 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:05.113918 systemd-logind[2113]: New session 21 of user core.
Apr 30 00:46:05.120169 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 00:46:05.410017 sshd[5198]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:05.416119 systemd[1]: sshd@20-172.31.27.157:22-147.75.109.163:41782.service: Deactivated successfully.
Apr 30 00:46:05.425509 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 00:46:05.427977 systemd-logind[2113]: Session 21 logged out. Waiting for processes to exit.
Apr 30 00:46:05.430108 systemd-logind[2113]: Removed session 21.
Apr 30 00:46:10.456198 systemd[1]: Started sshd@21-172.31.27.157:22-147.75.109.163:45922.service - OpenSSH per-connection server daemon (147.75.109.163:45922).
Apr 30 00:46:10.719314 sshd[5212]: Accepted publickey for core from 147.75.109.163 port 45922 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:10.722183 sshd[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:10.732194 systemd-logind[2113]: New session 22 of user core.
Apr 30 00:46:10.739249 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 00:46:11.032388 sshd[5212]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:11.039348 systemd[1]: sshd@21-172.31.27.157:22-147.75.109.163:45922.service: Deactivated successfully.
Apr 30 00:46:11.046124 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 00:46:11.048949 systemd-logind[2113]: Session 22 logged out. Waiting for processes to exit.
Apr 30 00:46:11.051102 systemd-logind[2113]: Removed session 22.
Apr 30 00:46:16.079240 systemd[1]: Started sshd@22-172.31.27.157:22-147.75.109.163:45928.service - OpenSSH per-connection server daemon (147.75.109.163:45928).
Apr 30 00:46:16.335866 sshd[5229]: Accepted publickey for core from 147.75.109.163 port 45928 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:16.338699 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:16.348789 systemd-logind[2113]: New session 23 of user core.
Apr 30 00:46:16.353483 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 00:46:16.641307 sshd[5229]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:16.649378 systemd[1]: sshd@22-172.31.27.157:22-147.75.109.163:45928.service: Deactivated successfully.
Apr 30 00:46:16.655352 systemd-logind[2113]: Session 23 logged out. Waiting for processes to exit.
Apr 30 00:46:16.656110 systemd[1]: session-23.scope: Deactivated successfully.
Apr 30 00:46:16.659568 systemd-logind[2113]: Removed session 23.
Apr 30 00:46:21.686487 systemd[1]: Started sshd@23-172.31.27.157:22-147.75.109.163:35640.service - OpenSSH per-connection server daemon (147.75.109.163:35640).
Apr 30 00:46:21.961777 sshd[5243]: Accepted publickey for core from 147.75.109.163 port 35640 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:21.963224 sshd[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:21.974973 systemd-logind[2113]: New session 24 of user core.
Apr 30 00:46:21.981881 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 30 00:46:22.266888 sshd[5243]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:22.273965 systemd[1]: sshd@23-172.31.27.157:22-147.75.109.163:35640.service: Deactivated successfully.
Apr 30 00:46:22.279191 systemd-logind[2113]: Session 24 logged out. Waiting for processes to exit.
Apr 30 00:46:22.280636 systemd[1]: session-24.scope: Deactivated successfully.
Apr 30 00:46:22.284167 systemd-logind[2113]: Removed session 24.
Apr 30 00:46:27.313170 systemd[1]: Started sshd@24-172.31.27.157:22-147.75.109.163:47764.service - OpenSSH per-connection server daemon (147.75.109.163:47764).
Apr 30 00:46:27.581483 sshd[5259]: Accepted publickey for core from 147.75.109.163 port 47764 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:27.584270 sshd[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:27.595463 systemd-logind[2113]: New session 25 of user core.
Apr 30 00:46:27.602308 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 30 00:46:27.889976 sshd[5259]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:27.898130 systemd-logind[2113]: Session 25 logged out. Waiting for processes to exit.
Apr 30 00:46:27.898412 systemd[1]: sshd@24-172.31.27.157:22-147.75.109.163:47764.service: Deactivated successfully.
Apr 30 00:46:27.905396 systemd[1]: session-25.scope: Deactivated successfully.
Apr 30 00:46:27.906928 systemd-logind[2113]: Removed session 25.
Apr 30 00:46:27.935159 systemd[1]: Started sshd@25-172.31.27.157:22-147.75.109.163:47780.service - OpenSSH per-connection server daemon (147.75.109.163:47780).
Apr 30 00:46:28.210164 sshd[5273]: Accepted publickey for core from 147.75.109.163 port 47780 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:28.213560 sshd[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:28.221784 systemd-logind[2113]: New session 26 of user core.
Apr 30 00:46:28.228608 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 30 00:46:30.569248 kubelet[3658]: E0430 00:46:30.568617 3658 configmap.go:199] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found
Apr 30 00:46:30.572555 kubelet[3658]: E0430 00:46:30.571492 3658 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-config-path podName:d9f15b7b-10de-45df-ac12-c941ea2d59ec nodeName:}" failed. No retries permitted until 2025-04-30 00:46:31.071448783 +0000 UTC m=+111.941089102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-config-path") pod "cilium-v5db4" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec") : configmap "cilium-config" not found
Apr 30 00:46:30.595900 containerd[2141]: time="2025-04-30T00:46:30.595753055Z" level=info msg="StopContainer for \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\" with timeout 30 (s)"
Apr 30 00:46:30.597847 containerd[2141]: time="2025-04-30T00:46:30.596753795Z" level=info msg="Stop container \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\" with signal terminated"
Apr 30 00:46:30.658307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b-rootfs.mount: Deactivated successfully.
Apr 30 00:46:30.675730 containerd[2141]: time="2025-04-30T00:46:30.675606780Z" level=info msg="shim disconnected" id=8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b namespace=k8s.io
Apr 30 00:46:30.675730 containerd[2141]: time="2025-04-30T00:46:30.675722352Z" level=warning msg="cleaning up after shim disconnected" id=8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b namespace=k8s.io
Apr 30 00:46:30.676938 containerd[2141]: time="2025-04-30T00:46:30.675746844Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:46:30.701941 containerd[2141]: time="2025-04-30T00:46:30.701656212Z" level=info msg="StopContainer for \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\" returns successfully"
Apr 30 00:46:30.703035 containerd[2141]: time="2025-04-30T00:46:30.702532272Z" level=info msg="StopPodSandbox for \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\""
Apr 30 00:46:30.703035 containerd[2141]: time="2025-04-30T00:46:30.702589464Z" level=info msg="Container to stop \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:46:30.708627 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade-shm.mount: Deactivated successfully.
Apr 30 00:46:30.766524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade-rootfs.mount: Deactivated successfully.
Apr 30 00:46:30.772767 containerd[2141]: time="2025-04-30T00:46:30.772538868Z" level=info msg="shim disconnected" id=ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade namespace=k8s.io
Apr 30 00:46:30.773097 containerd[2141]: time="2025-04-30T00:46:30.773041944Z" level=warning msg="cleaning up after shim disconnected" id=ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade namespace=k8s.io
Apr 30 00:46:30.773097 containerd[2141]: time="2025-04-30T00:46:30.773083932Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:46:30.796830 containerd[2141]: time="2025-04-30T00:46:30.796769772Z" level=info msg="TearDown network for sandbox \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\" successfully"
Apr 30 00:46:30.796830 containerd[2141]: time="2025-04-30T00:46:30.796822848Z" level=info msg="StopPodSandbox for \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\" returns successfully"
Apr 30 00:46:30.870562 kubelet[3658]: I0430 00:46:30.869223 3658 scope.go:117] "RemoveContainer" containerID="8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b"
Apr 30 00:46:30.882943 containerd[2141]: time="2025-04-30T00:46:30.882865885Z" level=info msg="RemoveContainer for \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\""
Apr 30 00:46:30.892377 containerd[2141]: time="2025-04-30T00:46:30.892157089Z" level=info msg="RemoveContainer for \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\" returns successfully"
Apr 30 00:46:30.893031 kubelet[3658]: I0430 00:46:30.892898 3658 scope.go:117] "RemoveContainer" containerID="8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b"
Apr 30 00:46:30.894718 containerd[2141]: time="2025-04-30T00:46:30.894463597Z" level=error msg="ContainerStatus for \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\": not found"
Apr 30 00:46:30.894883 kubelet[3658]: E0430 00:46:30.894771 3658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\": not found" containerID="8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b"
Apr 30 00:46:30.895047 kubelet[3658]: I0430 00:46:30.894827 3658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b"} err="failed to get container status \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cb2a6e89ae38ca71eeff3f1e96815d189eb17122f8981fb8915ecfd99cf5b3b\": not found"
Apr 30 00:46:30.922707 containerd[2141]: time="2025-04-30T00:46:30.922402729Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:46:30.933199 containerd[2141]: time="2025-04-30T00:46:30.933058225Z" level=info msg="StopContainer for \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\" with timeout 2 (s)"
Apr 30 00:46:30.933966 containerd[2141]: time="2025-04-30T00:46:30.933771625Z" level=info msg="Stop container \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\" with signal terminated"
Apr 30 00:46:30.947189 systemd-networkd[1686]: lxc_health: Link DOWN
Apr 30 00:46:30.947209 systemd-networkd[1686]: lxc_health: Lost carrier
Apr 30 00:46:30.977842 kubelet[3658]: I0430 00:46:30.972251 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ecdb89e-d0b5-4e19-ad77-b200634425d7-cilium-config-path\") pod \"8ecdb89e-d0b5-4e19-ad77-b200634425d7\" (UID: \"8ecdb89e-d0b5-4e19-ad77-b200634425d7\") "
Apr 30 00:46:30.977842 kubelet[3658]: I0430 00:46:30.972321 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ll4l\" (UniqueName: \"kubernetes.io/projected/8ecdb89e-d0b5-4e19-ad77-b200634425d7-kube-api-access-8ll4l\") pod \"8ecdb89e-d0b5-4e19-ad77-b200634425d7\" (UID: \"8ecdb89e-d0b5-4e19-ad77-b200634425d7\") "
Apr 30 00:46:30.987011 kubelet[3658]: I0430 00:46:30.986952 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ecdb89e-d0b5-4e19-ad77-b200634425d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8ecdb89e-d0b5-4e19-ad77-b200634425d7" (UID: "8ecdb89e-d0b5-4e19-ad77-b200634425d7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 00:46:30.991714 kubelet[3658]: I0430 00:46:30.991625 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ecdb89e-d0b5-4e19-ad77-b200634425d7-kube-api-access-8ll4l" (OuterVolumeSpecName: "kube-api-access-8ll4l") pod "8ecdb89e-d0b5-4e19-ad77-b200634425d7" (UID: "8ecdb89e-d0b5-4e19-ad77-b200634425d7"). InnerVolumeSpecName "kube-api-access-8ll4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 00:46:31.027502 containerd[2141]: time="2025-04-30T00:46:31.027344866Z" level=info msg="shim disconnected" id=a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad namespace=k8s.io
Apr 30 00:46:31.027502 containerd[2141]: time="2025-04-30T00:46:31.027489346Z" level=warning msg="cleaning up after shim disconnected" id=a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad namespace=k8s.io
Apr 30 00:46:31.027850 containerd[2141]: time="2025-04-30T00:46:31.027511822Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:46:31.047097 containerd[2141]: time="2025-04-30T00:46:31.047014846Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:46:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 00:46:31.053145 containerd[2141]: time="2025-04-30T00:46:31.053036002Z" level=info msg="StopContainer for \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\" returns successfully"
Apr 30 00:46:31.054196 containerd[2141]: time="2025-04-30T00:46:31.053868958Z" level=info msg="StopPodSandbox for \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\""
Apr 30 00:46:31.054196 containerd[2141]: time="2025-04-30T00:46:31.053942590Z" level=info msg="Container to stop \"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:46:31.054196 containerd[2141]: time="2025-04-30T00:46:31.053970466Z" level=info msg="Container to stop \"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:46:31.054196 containerd[2141]: time="2025-04-30T00:46:31.054012958Z" level=info msg="Container to stop \"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:46:31.054196 containerd[2141]: time="2025-04-30T00:46:31.054036874Z" level=info msg="Container to stop \"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:46:31.054196 containerd[2141]: time="2025-04-30T00:46:31.054069502Z" level=info msg="Container to stop \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:46:31.074421 kubelet[3658]: I0430 00:46:31.074312 3658 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8ecdb89e-d0b5-4e19-ad77-b200634425d7-cilium-config-path\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.074421 kubelet[3658]: I0430 00:46:31.074365 3658 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8ll4l\" (UniqueName: \"kubernetes.io/projected/8ecdb89e-d0b5-4e19-ad77-b200634425d7-kube-api-access-8ll4l\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.074421 kubelet[3658]: E0430 00:46:31.074405 3658 configmap.go:199] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found
Apr 30 00:46:31.074789 kubelet[3658]: E0430 00:46:31.074500 3658 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-config-path podName:d9f15b7b-10de-45df-ac12-c941ea2d59ec nodeName:}" failed. No retries permitted until 2025-04-30 00:46:32.074472154 +0000 UTC m=+112.944112485 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-config-path") pod "cilium-v5db4" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec") : configmap "cilium-config" not found
Apr 30 00:46:31.109830 containerd[2141]: time="2025-04-30T00:46:31.107051794Z" level=info msg="shim disconnected" id=46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c namespace=k8s.io
Apr 30 00:46:31.109830 containerd[2141]: time="2025-04-30T00:46:31.107128498Z" level=warning msg="cleaning up after shim disconnected" id=46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c namespace=k8s.io
Apr 30 00:46:31.109830 containerd[2141]: time="2025-04-30T00:46:31.107154790Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:46:31.142046 containerd[2141]: time="2025-04-30T00:46:31.140706478Z" level=info msg="TearDown network for sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" successfully"
Apr 30 00:46:31.142046 containerd[2141]: time="2025-04-30T00:46:31.140779930Z" level=info msg="StopPodSandbox for \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" returns successfully"
Apr 30 00:46:31.176705 kubelet[3658]: I0430 00:46:31.174523 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-host-proc-sys-kernel\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.176705 kubelet[3658]: I0430 00:46:31.174592 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9f15b7b-10de-45df-ac12-c941ea2d59ec-hubble-tls\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.176705 kubelet[3658]: I0430 00:46:31.174629 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-hostproc\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.177537 kubelet[3658]: I0430 00:46:31.177019 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9f15b7b-10de-45df-ac12-c941ea2d59ec-clustermesh-secrets\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.177678 kubelet[3658]: I0430 00:46:31.177579 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-etc-cni-netd\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.177678 kubelet[3658]: I0430 00:46:31.177625 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-cgroup\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.178029 kubelet[3658]: I0430 00:46:31.177700 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cni-path\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.178029 kubelet[3658]: I0430 00:46:31.177744 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-run\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.178029 kubelet[3658]: I0430 00:46:31.177789 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7gjq\" (UniqueName: \"kubernetes.io/projected/d9f15b7b-10de-45df-ac12-c941ea2d59ec-kube-api-access-k7gjq\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.178029 kubelet[3658]: I0430 00:46:31.177822 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-lib-modules\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.178029 kubelet[3658]: I0430 00:46:31.177855 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-host-proc-sys-net\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.178029 kubelet[3658]: I0430 00:46:31.177891 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-bpf-maps\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.178462 kubelet[3658]: I0430 00:46:31.177927 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-xtables-lock\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.178462 kubelet[3658]: I0430 00:46:31.177972 3658 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-config-path\") pod \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\" (UID: \"d9f15b7b-10de-45df-ac12-c941ea2d59ec\") "
Apr 30 00:46:31.184834 kubelet[3658]: I0430 00:46:31.184761 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 00:46:31.184987 kubelet[3658]: I0430 00:46:31.184859 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:46:31.189622 kubelet[3658]: I0430 00:46:31.189542 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f15b7b-10de-45df-ac12-c941ea2d59ec-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 00:46:31.189831 kubelet[3658]: I0430 00:46:31.189632 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-hostproc" (OuterVolumeSpecName: "hostproc") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:46:31.191383 kubelet[3658]: I0430 00:46:31.191231 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:46:31.191722 kubelet[3658]: I0430 00:46:31.191475 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:46:31.191722 kubelet[3658]: I0430 00:46:31.191521 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:46:31.191722 kubelet[3658]: I0430 00:46:31.191684 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:46:31.191934 kubelet[3658]: I0430 00:46:31.191767 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cni-path" (OuterVolumeSpecName: "cni-path") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:46:31.191934 kubelet[3658]: I0430 00:46:31.191807 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:46:31.192045 kubelet[3658]: I0430 00:46:31.191956 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:46:31.192770 kubelet[3658]: I0430 00:46:31.192165 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:46:31.200815 kubelet[3658]: I0430 00:46:31.200598 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f15b7b-10de-45df-ac12-c941ea2d59ec-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 00:46:31.201398 kubelet[3658]: I0430 00:46:31.201283 3658 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f15b7b-10de-45df-ac12-c941ea2d59ec-kube-api-access-k7gjq" (OuterVolumeSpecName: "kube-api-access-k7gjq") pod "d9f15b7b-10de-45df-ac12-c941ea2d59ec" (UID: "d9f15b7b-10de-45df-ac12-c941ea2d59ec"). InnerVolumeSpecName "kube-api-access-k7gjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 00:46:31.278808 kubelet[3658]: I0430 00:46:31.278689 3658 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-bpf-maps\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.278808 kubelet[3658]: I0430 00:46:31.278734 3658 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-xtables-lock\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.278808 kubelet[3658]: I0430 00:46:31.278757 3658 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-config-path\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.278808 kubelet[3658]: I0430 00:46:31.278781 3658 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-host-proc-sys-kernel\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.278808 kubelet[3658]: I0430 00:46:31.278802 3658 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d9f15b7b-10de-45df-ac12-c941ea2d59ec-hubble-tls\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.278808 kubelet[3658]: I0430 00:46:31.278820 3658 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d9f15b7b-10de-45df-ac12-c941ea2d59ec-clustermesh-secrets\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.279243 kubelet[3658]: I0430 00:46:31.278839 3658 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-etc-cni-netd\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.279243 kubelet[3658]: I0430 00:46:31.278858 3658 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-cgroup\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.279243 kubelet[3658]: I0430 00:46:31.278876 3658 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-hostproc\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.279243 kubelet[3658]: I0430 00:46:31.278898 3658 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cni-path\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.279243 kubelet[3658]: I0430 00:46:31.278917 3658 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-cilium-run\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.279243 kubelet[3658]: I0430 00:46:31.278935 3658 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k7gjq\" (UniqueName: \"kubernetes.io/projected/d9f15b7b-10de-45df-ac12-c941ea2d59ec-kube-api-access-k7gjq\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.279243 kubelet[3658]: I0430 00:46:31.278953 3658 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-lib-modules\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.279243 kubelet[3658]: I0430 00:46:31.278976 3658 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d9f15b7b-10de-45df-ac12-c941ea2d59ec-host-proc-sys-net\") on node \"ip-172-31-27-157\" DevicePath \"\""
Apr 30 00:46:31.381286 kubelet[3658]: I0430 00:46:31.381237 3658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ecdb89e-d0b5-4e19-ad77-b200634425d7" path="/var/lib/kubelet/pods/8ecdb89e-d0b5-4e19-ad77-b200634425d7/volumes"
Apr 30 00:46:31.654587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad-rootfs.mount: Deactivated successfully.
Apr 30 00:46:31.654925 systemd[1]: var-lib-kubelet-pods-8ecdb89e\x2dd0b5\x2d4e19\x2dad77\x2db200634425d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8ll4l.mount: Deactivated successfully.
Apr 30 00:46:31.655246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c-rootfs.mount: Deactivated successfully.
Apr 30 00:46:31.655480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c-shm.mount: Deactivated successfully.
Apr 30 00:46:31.655751 systemd[1]: var-lib-kubelet-pods-d9f15b7b\x2d10de\x2d45df\x2dac12\x2dc941ea2d59ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk7gjq.mount: Deactivated successfully.
Apr 30 00:46:31.655984 systemd[1]: var-lib-kubelet-pods-d9f15b7b\x2d10de\x2d45df\x2dac12\x2dc941ea2d59ec-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 30 00:46:31.656253 systemd[1]: var-lib-kubelet-pods-d9f15b7b\x2d10de\x2d45df\x2dac12\x2dc941ea2d59ec-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 30 00:46:31.878653 kubelet[3658]: I0430 00:46:31.878527 3658 scope.go:117] "RemoveContainer" containerID="a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad"
Apr 30 00:46:31.884064 containerd[2141]: time="2025-04-30T00:46:31.883413626Z" level=info msg="RemoveContainer for \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\""
Apr 30 00:46:31.890792 containerd[2141]: time="2025-04-30T00:46:31.890735246Z" level=info msg="RemoveContainer for \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\" returns successfully"
Apr 30 00:46:31.891470 kubelet[3658]: I0430 00:46:31.891316 3658 scope.go:117] "RemoveContainer" containerID="73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c"
Apr 30 00:46:31.894697 containerd[2141]: time="2025-04-30T00:46:31.894595070Z" level=info msg="RemoveContainer for \"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c\""
Apr 30 00:46:31.902296 containerd[2141]: time="2025-04-30T00:46:31.902235182Z" level=info msg="RemoveContainer for \"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c\" returns successfully"
Apr 30 00:46:31.904510 kubelet[3658]: I0430 00:46:31.904194 3658 scope.go:117] "RemoveContainer" containerID="dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93"
Apr 30 00:46:31.910202 containerd[2141]: time="2025-04-30T00:46:31.910158986Z" level=info msg="RemoveContainer for \"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\""
Apr 30 00:46:31.917608 containerd[2141]: time="2025-04-30T00:46:31.917414642Z" level=info msg="RemoveContainer for \"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\" returns successfully"
Apr 30 00:46:31.918074 kubelet[3658]: I0430 00:46:31.917818 3658 scope.go:117] "RemoveContainer" containerID="6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2"
Apr 30 00:46:31.919708 containerd[2141]: time="2025-04-30T00:46:31.919592282Z" level=info msg="RemoveContainer for \"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2\""
Apr 30 00:46:31.925855 containerd[2141]: time="2025-04-30T00:46:31.925799798Z" level=info msg="RemoveContainer for \"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2\" returns successfully"
Apr 30 00:46:31.926189 kubelet[3658]: I0430 00:46:31.926122 3658 scope.go:117] "RemoveContainer" containerID="ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2"
Apr 30 00:46:31.928198 containerd[2141]: time="2025-04-30T00:46:31.928148150Z" level=info msg="RemoveContainer for \"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2\""
Apr 30 00:46:31.933938 containerd[2141]: time="2025-04-30T00:46:31.933842762Z" level=info msg="RemoveContainer for \"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2\" returns successfully"
Apr 30 00:46:31.934477 kubelet[3658]: I0430 00:46:31.934431 3658 scope.go:117] "RemoveContainer" containerID="a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad"
Apr 30 00:46:31.934982 containerd[2141]: time="2025-04-30T00:46:31.934910930Z" level=error msg="ContainerStatus for \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\": not found"
Apr 30 00:46:31.935220 kubelet[3658]: E0430 00:46:31.935140 3658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\": not found" containerID="a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad"
Apr 30 00:46:31.935220 kubelet[3658]: I0430 00:46:31.935183 3658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad"} err="failed to get container status \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0885cb0e6c3db3a5987a86869a30b3224af35c30730f885a473c823d96107ad\": not found"
Apr 30 00:46:31.935858 kubelet[3658]: I0430 00:46:31.935218 3658 scope.go:117] "RemoveContainer" containerID="73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c"
Apr 30 00:46:31.935919 containerd[2141]: time="2025-04-30T00:46:31.935530898Z" level=error msg="ContainerStatus for \"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c\": not found"
Apr 30 00:46:31.936354 kubelet[3658]: E0430 00:46:31.936062 3658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c\": not found" containerID="73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c"
Apr 30 00:46:31.936354 kubelet[3658]: I0430 00:46:31.936143 3658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c"} err="failed to get container status \"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"73a6b974ae73d760f1c9f53a0e52009cc5cba638f8bd24ad43819a47ae2b1a9c\": not found"
Apr 30 00:46:31.936354 kubelet[3658]: I0430 00:46:31.936202 3658 scope.go:117] "RemoveContainer" containerID="dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93"
Apr 30 00:46:31.936608 containerd[2141]: time="2025-04-30T00:46:31.936531206Z" level=error msg="ContainerStatus for \"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\": not found"
Apr 30 00:46:31.937027 kubelet[3658]: E0430 00:46:31.936781 3658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\": not found" containerID="dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93"
Apr 30 00:46:31.937027 kubelet[3658]: I0430 00:46:31.936818 3658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93"} err="failed to get container status \"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbde84556221ae47079dea1a3ce930d7e7f45cf62160805bd36b47fa00634c93\": not found"
Apr 30 00:46:31.937027 kubelet[3658]: I0430 00:46:31.936865 3658 scope.go:117] "RemoveContainer" containerID="6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2"
Apr 30 00:46:31.937470 kubelet[3658]: E0430 00:46:31.937387 3658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2\": not found" containerID="6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2"
Apr 30 00:46:31.937548 containerd[2141]: time="2025-04-30T00:46:31.937123718Z" level=error msg="ContainerStatus for \"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2\": not found"
Apr 30 00:46:31.937775 kubelet[3658]: I0430 00:46:31.937635 3658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2"} err="failed to get container status \"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f4b681e596d6028620262afac95988c67407663664554e25bc09079faecdbe2\": not found"
Apr 30 00:46:31.937775 kubelet[3658]: I0430 00:46:31.937705 3658 scope.go:117] "RemoveContainer" containerID="ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2"
Apr 30 00:46:31.938240 containerd[2141]: time="2025-04-30T00:46:31.938188142Z" level=error msg="ContainerStatus for \"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2\": not found"
Apr 30 00:46:31.938460 kubelet[3658]: E0430 00:46:31.938400 3658 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2\": not found" containerID="ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2"
Apr 30 00:46:31.938562 kubelet[3658]: I0430 00:46:31.938462 3658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2"} err="failed to get container status \"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff6d23e33a704e15278907cbe181a56f03d6a86855f38648efd965ab273920d2\": not found"
Apr 30 00:46:32.545526 sshd[5273]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:32.551179 systemd[1]: sshd@25-172.31.27.157:22-147.75.109.163:47780.service: Deactivated successfully.
Apr 30 00:46:32.551504 systemd-logind[2113]: Session 26 logged out. Waiting for processes to exit.
Apr 30 00:46:32.559423 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 00:46:32.563257 systemd-logind[2113]: Removed session 26.
Apr 30 00:46:32.589298 systemd[1]: Started sshd@26-172.31.27.157:22-147.75.109.163:47782.service - OpenSSH per-connection server daemon (147.75.109.163:47782).
Apr 30 00:46:32.858876 sshd[5440]: Accepted publickey for core from 147.75.109.163 port 47782 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:32.861163 sshd[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:32.869908 systemd-logind[2113]: New session 27 of user core.
Apr 30 00:46:32.874156 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 30 00:46:33.176806 ntpd[2089]: Deleting interface #10 lxc_health, fe80::a863:4dff:fef8:dfe2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs Apr 30 00:46:33.177348 ntpd[2089]: 30 Apr 00:46:33 ntpd[2089]: Deleting interface #10 lxc_health, fe80::a863:4dff:fef8:dfe2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs Apr 30 00:46:33.382136 kubelet[3658]: I0430 00:46:33.382083 3658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f15b7b-10de-45df-ac12-c941ea2d59ec" path="/var/lib/kubelet/pods/d9f15b7b-10de-45df-ac12-c941ea2d59ec/volumes" Apr 30 00:46:34.220787 kubelet[3658]: I0430 00:46:34.220104 3658 topology_manager.go:215] "Topology Admit Handler" podUID="dd50990b-9312-4de4-82fa-d8b4f26d60a5" podNamespace="kube-system" podName="cilium-kmg65" Apr 30 00:46:34.220962 kubelet[3658]: E0430 00:46:34.220835 3658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9f15b7b-10de-45df-ac12-c941ea2d59ec" containerName="apply-sysctl-overwrites" Apr 30 00:46:34.220962 kubelet[3658]: E0430 00:46:34.220887 3658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9f15b7b-10de-45df-ac12-c941ea2d59ec" containerName="mount-bpf-fs" Apr 30 00:46:34.220962 kubelet[3658]: E0430 00:46:34.220905 3658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9f15b7b-10de-45df-ac12-c941ea2d59ec" containerName="clean-cilium-state" Apr 30 00:46:34.220962 kubelet[3658]: E0430 00:46:34.220921 3658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ecdb89e-d0b5-4e19-ad77-b200634425d7" containerName="cilium-operator" Apr 30 00:46:34.220962 kubelet[3658]: E0430 00:46:34.220937 3658 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9f15b7b-10de-45df-ac12-c941ea2d59ec" containerName="mount-cgroup" Apr 30 00:46:34.221471 kubelet[3658]: E0430 00:46:34.220979 3658 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="d9f15b7b-10de-45df-ac12-c941ea2d59ec" containerName="cilium-agent" Apr 30 00:46:34.221471 kubelet[3658]: I0430 00:46:34.221023 3658 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f15b7b-10de-45df-ac12-c941ea2d59ec" containerName="cilium-agent" Apr 30 00:46:34.221471 kubelet[3658]: I0430 00:46:34.221166 3658 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ecdb89e-d0b5-4e19-ad77-b200634425d7" containerName="cilium-operator" Apr 30 00:46:34.230742 sshd[5440]: pam_unix(sshd:session): session closed for user core Apr 30 00:46:34.251451 systemd[1]: sshd@26-172.31.27.157:22-147.75.109.163:47782.service: Deactivated successfully. Apr 30 00:46:34.265871 systemd[1]: session-27.scope: Deactivated successfully. Apr 30 00:46:34.269888 systemd-logind[2113]: Session 27 logged out. Waiting for processes to exit. Apr 30 00:46:34.286601 systemd[1]: Started sshd@27-172.31.27.157:22-147.75.109.163:47786.service - OpenSSH per-connection server daemon (147.75.109.163:47786). Apr 30 00:46:34.290791 systemd-logind[2113]: Removed session 27. 
Apr 30 00:46:34.298923 kubelet[3658]: I0430 00:46:34.298869 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd50990b-9312-4de4-82fa-d8b4f26d60a5-host-proc-sys-net\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299036 kubelet[3658]: I0430 00:46:34.298956 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd50990b-9312-4de4-82fa-d8b4f26d60a5-xtables-lock\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299036 kubelet[3658]: I0430 00:46:34.298998 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd50990b-9312-4de4-82fa-d8b4f26d60a5-etc-cni-netd\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299193 kubelet[3658]: I0430 00:46:34.299033 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dd50990b-9312-4de4-82fa-d8b4f26d60a5-cilium-ipsec-secrets\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299193 kubelet[3658]: I0430 00:46:34.299071 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd50990b-9312-4de4-82fa-d8b4f26d60a5-cilium-run\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299193 kubelet[3658]: I0430 00:46:34.299106 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd50990b-9312-4de4-82fa-d8b4f26d60a5-bpf-maps\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299193 kubelet[3658]: I0430 00:46:34.299141 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd50990b-9312-4de4-82fa-d8b4f26d60a5-cilium-cgroup\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299193 kubelet[3658]: I0430 00:46:34.299173 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd50990b-9312-4de4-82fa-d8b4f26d60a5-cni-path\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299455 kubelet[3658]: I0430 00:46:34.299223 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd50990b-9312-4de4-82fa-d8b4f26d60a5-lib-modules\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299455 kubelet[3658]: I0430 00:46:34.299260 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd50990b-9312-4de4-82fa-d8b4f26d60a5-clustermesh-secrets\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299455 kubelet[3658]: I0430 00:46:34.299295 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd50990b-9312-4de4-82fa-d8b4f26d60a5-host-proc-sys-kernel\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299455 kubelet[3658]: I0430 00:46:34.299328 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd50990b-9312-4de4-82fa-d8b4f26d60a5-hostproc\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299455 kubelet[3658]: I0430 00:46:34.299362 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stx6q\" (UniqueName: \"kubernetes.io/projected/dd50990b-9312-4de4-82fa-d8b4f26d60a5-kube-api-access-stx6q\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299455 kubelet[3658]: I0430 00:46:34.299396 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd50990b-9312-4de4-82fa-d8b4f26d60a5-hubble-tls\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.299799 kubelet[3658]: I0430 00:46:34.299432 3658 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd50990b-9312-4de4-82fa-d8b4f26d60a5-cilium-config-path\") pod \"cilium-kmg65\" (UID: \"dd50990b-9312-4de4-82fa-d8b4f26d60a5\") " pod="kube-system/cilium-kmg65"
Apr 30 00:46:34.566381 containerd[2141]: time="2025-04-30T00:46:34.566229267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kmg65,Uid:dd50990b-9312-4de4-82fa-d8b4f26d60a5,Namespace:kube-system,Attempt:0,}"
Apr 30 00:46:34.617342 containerd[2141]: time="2025-04-30T00:46:34.617159475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:46:34.618305 containerd[2141]: time="2025-04-30T00:46:34.618094767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:46:34.618545 containerd[2141]: time="2025-04-30T00:46:34.618346275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:46:34.619096 containerd[2141]: time="2025-04-30T00:46:34.618927375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:46:34.637507 sshd[5454]: Accepted publickey for core from 147.75.109.163 port 47786 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:34.642815 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:34.652300 kubelet[3658]: E0430 00:46:34.652233 3658 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 00:46:34.668514 systemd-logind[2113]: New session 28 of user core.
Apr 30 00:46:34.671904 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 30 00:46:34.715552 containerd[2141]: time="2025-04-30T00:46:34.715453804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kmg65,Uid:dd50990b-9312-4de4-82fa-d8b4f26d60a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"51c0def4522f5f0963c6f50e774e67937d0c5c0efecbe235ebc62ee9bdd78922\""
Apr 30 00:46:34.721163 containerd[2141]: time="2025-04-30T00:46:34.720812332Z" level=info msg="CreateContainer within sandbox \"51c0def4522f5f0963c6f50e774e67937d0c5c0efecbe235ebc62ee9bdd78922\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 30 00:46:34.744859 containerd[2141]: time="2025-04-30T00:46:34.744787972Z" level=info msg="CreateContainer within sandbox \"51c0def4522f5f0963c6f50e774e67937d0c5c0efecbe235ebc62ee9bdd78922\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"058921988e7d37efc869acd5caad354336d81439f51b4652c220dd9032f97d89\""
Apr 30 00:46:34.746342 containerd[2141]: time="2025-04-30T00:46:34.746109448Z" level=info msg="StartContainer for \"058921988e7d37efc869acd5caad354336d81439f51b4652c220dd9032f97d89\""
Apr 30 00:46:34.842079 containerd[2141]: time="2025-04-30T00:46:34.841903673Z" level=info msg="StartContainer for \"058921988e7d37efc869acd5caad354336d81439f51b4652c220dd9032f97d89\" returns successfully"
Apr 30 00:46:34.857059 sshd[5454]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:34.868431 systemd[1]: sshd@27-172.31.27.157:22-147.75.109.163:47786.service: Deactivated successfully.
Apr 30 00:46:34.880102 systemd[1]: session-28.scope: Deactivated successfully.
Apr 30 00:46:34.882824 systemd-logind[2113]: Session 28 logged out. Waiting for processes to exit.
Apr 30 00:46:34.885934 systemd-logind[2113]: Removed session 28.
Apr 30 00:46:34.906375 systemd[1]: Started sshd@28-172.31.27.157:22-147.75.109.163:47790.service - OpenSSH per-connection server daemon (147.75.109.163:47790).
Apr 30 00:46:34.949284 containerd[2141]: time="2025-04-30T00:46:34.949191545Z" level=info msg="shim disconnected" id=058921988e7d37efc869acd5caad354336d81439f51b4652c220dd9032f97d89 namespace=k8s.io
Apr 30 00:46:34.949284 containerd[2141]: time="2025-04-30T00:46:34.949274309Z" level=warning msg="cleaning up after shim disconnected" id=058921988e7d37efc869acd5caad354336d81439f51b4652c220dd9032f97d89 namespace=k8s.io
Apr 30 00:46:34.950037 containerd[2141]: time="2025-04-30T00:46:34.949299161Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:46:35.178112 sshd[5555]: Accepted publickey for core from 147.75.109.163 port 47790 ssh2: RSA SHA256:jA4E/E4F85fdbuY20NmIGoEsn2jbc3vfN6P5NfpO3KQ
Apr 30 00:46:35.179388 sshd[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:46:35.187078 systemd-logind[2113]: New session 29 of user core.
Apr 30 00:46:35.194303 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 30 00:46:35.916508 containerd[2141]: time="2025-04-30T00:46:35.915548070Z" level=info msg="CreateContainer within sandbox \"51c0def4522f5f0963c6f50e774e67937d0c5c0efecbe235ebc62ee9bdd78922\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 00:46:35.950446 containerd[2141]: time="2025-04-30T00:46:35.950314590Z" level=info msg="CreateContainer within sandbox \"51c0def4522f5f0963c6f50e774e67937d0c5c0efecbe235ebc62ee9bdd78922\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"275b72269e316f9b6200d8eb18c75ddc37078aeddaa523b968bd2eddb92bb1f6\""
Apr 30 00:46:35.952003 containerd[2141]: time="2025-04-30T00:46:35.951931554Z" level=info msg="StartContainer for \"275b72269e316f9b6200d8eb18c75ddc37078aeddaa523b968bd2eddb92bb1f6\""
Apr 30 00:46:36.055233 containerd[2141]: time="2025-04-30T00:46:36.054976971Z" level=info msg="StartContainer for \"275b72269e316f9b6200d8eb18c75ddc37078aeddaa523b968bd2eddb92bb1f6\" returns successfully"
Apr 30 00:46:36.107403 containerd[2141]: time="2025-04-30T00:46:36.106647783Z" level=info msg="shim disconnected" id=275b72269e316f9b6200d8eb18c75ddc37078aeddaa523b968bd2eddb92bb1f6 namespace=k8s.io
Apr 30 00:46:36.107403 containerd[2141]: time="2025-04-30T00:46:36.106879011Z" level=warning msg="cleaning up after shim disconnected" id=275b72269e316f9b6200d8eb18c75ddc37078aeddaa523b968bd2eddb92bb1f6 namespace=k8s.io
Apr 30 00:46:36.107403 containerd[2141]: time="2025-04-30T00:46:36.106923351Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:46:36.416542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-275b72269e316f9b6200d8eb18c75ddc37078aeddaa523b968bd2eddb92bb1f6-rootfs.mount: Deactivated successfully.
Apr 30 00:46:36.922279 containerd[2141]: time="2025-04-30T00:46:36.921795079Z" level=info msg="CreateContainer within sandbox \"51c0def4522f5f0963c6f50e774e67937d0c5c0efecbe235ebc62ee9bdd78922\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 00:46:36.975405 containerd[2141]: time="2025-04-30T00:46:36.975349591Z" level=info msg="CreateContainer within sandbox \"51c0def4522f5f0963c6f50e774e67937d0c5c0efecbe235ebc62ee9bdd78922\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0f0ecf5657049722088efdc7762e4c739d1b7fb49ec5e18e211a235d78982503\""
Apr 30 00:46:36.976855 containerd[2141]: time="2025-04-30T00:46:36.976795759Z" level=info msg="StartContainer for \"0f0ecf5657049722088efdc7762e4c739d1b7fb49ec5e18e211a235d78982503\""
Apr 30 00:46:37.088024 containerd[2141]: time="2025-04-30T00:46:37.087956296Z" level=info msg="StartContainer for \"0f0ecf5657049722088efdc7762e4c739d1b7fb49ec5e18e211a235d78982503\" returns successfully"
Apr 30 00:46:37.133319 containerd[2141]: time="2025-04-30T00:46:37.133004380Z" level=info msg="shim disconnected" id=0f0ecf5657049722088efdc7762e4c739d1b7fb49ec5e18e211a235d78982503 namespace=k8s.io
Apr 30 00:46:37.133319 containerd[2141]: time="2025-04-30T00:46:37.133075516Z" level=warning msg="cleaning up after shim disconnected" id=0f0ecf5657049722088efdc7762e4c739d1b7fb49ec5e18e211a235d78982503 namespace=k8s.io
Apr 30 00:46:37.133319 containerd[2141]: time="2025-04-30T00:46:37.133095268Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:46:37.416922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f0ecf5657049722088efdc7762e4c739d1b7fb49ec5e18e211a235d78982503-rootfs.mount: Deactivated successfully.
Apr 30 00:46:37.928066 containerd[2141]: time="2025-04-30T00:46:37.927992828Z" level=info msg="CreateContainer within sandbox \"51c0def4522f5f0963c6f50e774e67937d0c5c0efecbe235ebc62ee9bdd78922\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 00:46:37.962475 containerd[2141]: time="2025-04-30T00:46:37.962313848Z" level=info msg="CreateContainer within sandbox \"51c0def4522f5f0963c6f50e774e67937d0c5c0efecbe235ebc62ee9bdd78922\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4974db7cbd8841c885d03d3894e16319a3a707dcb8794dcffd96be123ed8a133\""
Apr 30 00:46:37.963876 containerd[2141]: time="2025-04-30T00:46:37.963499424Z" level=info msg="StartContainer for \"4974db7cbd8841c885d03d3894e16319a3a707dcb8794dcffd96be123ed8a133\""
Apr 30 00:46:38.076896 containerd[2141]: time="2025-04-30T00:46:38.076453685Z" level=info msg="StartContainer for \"4974db7cbd8841c885d03d3894e16319a3a707dcb8794dcffd96be123ed8a133\" returns successfully"
Apr 30 00:46:38.123474 containerd[2141]: time="2025-04-30T00:46:38.123116153Z" level=info msg="shim disconnected" id=4974db7cbd8841c885d03d3894e16319a3a707dcb8794dcffd96be123ed8a133 namespace=k8s.io
Apr 30 00:46:38.123474 containerd[2141]: time="2025-04-30T00:46:38.123189665Z" level=warning msg="cleaning up after shim disconnected" id=4974db7cbd8841c885d03d3894e16319a3a707dcb8794dcffd96be123ed8a133 namespace=k8s.io
Apr 30 00:46:38.123474 containerd[2141]: time="2025-04-30T00:46:38.123209381Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:46:38.416788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4974db7cbd8841c885d03d3894e16319a3a707dcb8794dcffd96be123ed8a133-rootfs.mount: Deactivated successfully.
Apr 30 00:46:38.938450 containerd[2141]: time="2025-04-30T00:46:38.938377473Z" level=info msg="CreateContainer within sandbox \"51c0def4522f5f0963c6f50e774e67937d0c5c0efecbe235ebc62ee9bdd78922\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 00:46:38.970862 containerd[2141]: time="2025-04-30T00:46:38.970793145Z" level=info msg="CreateContainer within sandbox \"51c0def4522f5f0963c6f50e774e67937d0c5c0efecbe235ebc62ee9bdd78922\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7283d73f68984ff36639993ed3d3aa243afb89f83597f3e64532fad09d3cf521\""
Apr 30 00:46:38.974716 containerd[2141]: time="2025-04-30T00:46:38.972076305Z" level=info msg="StartContainer for \"7283d73f68984ff36639993ed3d3aa243afb89f83597f3e64532fad09d3cf521\""
Apr 30 00:46:39.080405 containerd[2141]: time="2025-04-30T00:46:39.080221134Z" level=info msg="StartContainer for \"7283d73f68984ff36639993ed3d3aa243afb89f83597f3e64532fad09d3cf521\" returns successfully"
Apr 30 00:46:39.396448 containerd[2141]: time="2025-04-30T00:46:39.394112299Z" level=info msg="StopPodSandbox for \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\""
Apr 30 00:46:39.396448 containerd[2141]: time="2025-04-30T00:46:39.394328383Z" level=info msg="TearDown network for sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" successfully"
Apr 30 00:46:39.396448 containerd[2141]: time="2025-04-30T00:46:39.394376323Z" level=info msg="StopPodSandbox for \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" returns successfully"
Apr 30 00:46:39.396448 containerd[2141]: time="2025-04-30T00:46:39.396272131Z" level=info msg="RemovePodSandbox for \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\""
Apr 30 00:46:39.396448 containerd[2141]: time="2025-04-30T00:46:39.396319063Z" level=info msg="Forcibly stopping sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\""
Apr 30 00:46:39.396863 containerd[2141]: time="2025-04-30T00:46:39.396490783Z" level=info msg="TearDown network for sandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" successfully"
Apr 30 00:46:39.405496 containerd[2141]: time="2025-04-30T00:46:39.403563643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:46:39.405496 containerd[2141]: time="2025-04-30T00:46:39.403721143Z" level=info msg="RemovePodSandbox \"46739ed974d648b7fac9f8b4ab1fec802b2e5818d17848d38409a894b7166e3c\" returns successfully"
Apr 30 00:46:39.405496 containerd[2141]: time="2025-04-30T00:46:39.404719483Z" level=info msg="StopPodSandbox for \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\""
Apr 30 00:46:39.405496 containerd[2141]: time="2025-04-30T00:46:39.404866519Z" level=info msg="TearDown network for sandbox \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\" successfully"
Apr 30 00:46:39.405496 containerd[2141]: time="2025-04-30T00:46:39.404890903Z" level=info msg="StopPodSandbox for \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\" returns successfully"
Apr 30 00:46:39.407846 containerd[2141]: time="2025-04-30T00:46:39.406622575Z" level=info msg="RemovePodSandbox for \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\""
Apr 30 00:46:39.407846 containerd[2141]: time="2025-04-30T00:46:39.407079103Z" level=info msg="Forcibly stopping sandbox \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\""
Apr 30 00:46:39.407846 containerd[2141]: time="2025-04-30T00:46:39.407437531Z" level=info msg="TearDown network for sandbox \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\" successfully"
Apr 30 00:46:39.423926 containerd[2141]: time="2025-04-30T00:46:39.422966671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 00:46:39.423926 containerd[2141]: time="2025-04-30T00:46:39.423064711Z" level=info msg="RemovePodSandbox \"ae51ab646db7606f6f1f1591164c42c5000134c524b5b9a2ba92001fcc638ade\" returns successfully"
Apr 30 00:46:39.872708 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 30 00:46:44.079634 systemd-networkd[1686]: lxc_health: Link UP
Apr 30 00:46:44.085866 systemd-networkd[1686]: lxc_health: Gained carrier
Apr 30 00:46:44.110479 (udev-worker)[6301]: Network interface NamePolicy= disabled on kernel command line.
Apr 30 00:46:44.610699 kubelet[3658]: I0430 00:46:44.609206 3658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kmg65" podStartSLOduration=10.609183073 podStartE2EDuration="10.609183073s" podCreationTimestamp="2025-04-30 00:46:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:46:39.98648617 +0000 UTC m=+120.856126513" watchObservedRunningTime="2025-04-30 00:46:44.609183073 +0000 UTC m=+125.478823416"
Apr 30 00:46:46.071945 systemd-networkd[1686]: lxc_health: Gained IPv6LL
Apr 30 00:46:48.176829 ntpd[2089]: Listen normally on 13 lxc_health [fe80::6c85:d1ff:fe31:c0b5%14]:123
Apr 30 00:46:48.179260 ntpd[2089]: 30 Apr 00:46:48 ntpd[2089]: Listen normally on 13 lxc_health [fe80::6c85:d1ff:fe31:c0b5%14]:123
Apr 30 00:46:51.208950 sshd[5555]: pam_unix(sshd:session): session closed for user core
Apr 30 00:46:51.218684 systemd[1]: sshd@28-172.31.27.157:22-147.75.109.163:47790.service: Deactivated successfully.
Apr 30 00:46:51.230731 systemd[1]: session-29.scope: Deactivated successfully.
Apr 30 00:46:51.235350 systemd-logind[2113]: Session 29 logged out. Waiting for processes to exit.
Apr 30 00:46:51.239229 systemd-logind[2113]: Removed session 29.
Apr 30 00:47:04.747852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6191946b84b69dc326934250cab35a785e01e0201133bcbd4ada2b4656b9e91f-rootfs.mount: Deactivated successfully.
Apr 30 00:47:04.784576 containerd[2141]: time="2025-04-30T00:47:04.784420305Z" level=info msg="shim disconnected" id=6191946b84b69dc326934250cab35a785e01e0201133bcbd4ada2b4656b9e91f namespace=k8s.io
Apr 30 00:47:04.784576 containerd[2141]: time="2025-04-30T00:47:04.784502721Z" level=warning msg="cleaning up after shim disconnected" id=6191946b84b69dc326934250cab35a785e01e0201133bcbd4ada2b4656b9e91f namespace=k8s.io
Apr 30 00:47:04.784576 containerd[2141]: time="2025-04-30T00:47:04.784523349Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:05.024833 kubelet[3658]: I0430 00:47:05.024063 3658 scope.go:117] "RemoveContainer" containerID="6191946b84b69dc326934250cab35a785e01e0201133bcbd4ada2b4656b9e91f"
Apr 30 00:47:05.030937 containerd[2141]: time="2025-04-30T00:47:05.030868483Z" level=info msg="CreateContainer within sandbox \"26a5dbf56b6307b05b0ffaa06d178f182093538e2b0d08da54ff53d79bbfe347\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 30 00:47:05.057236 containerd[2141]: time="2025-04-30T00:47:05.057055195Z" level=info msg="CreateContainer within sandbox \"26a5dbf56b6307b05b0ffaa06d178f182093538e2b0d08da54ff53d79bbfe347\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2d3133cd210037b851a2c6417f1823d65b41603e75d03e14c81b5282446f8142\""
Apr 30 00:47:05.058339 containerd[2141]: time="2025-04-30T00:47:05.057960427Z" level=info msg="StartContainer for \"2d3133cd210037b851a2c6417f1823d65b41603e75d03e14c81b5282446f8142\""
Apr 30 00:47:05.177880 containerd[2141]: time="2025-04-30T00:47:05.177593155Z" level=info msg="StartContainer for \"2d3133cd210037b851a2c6417f1823d65b41603e75d03e14c81b5282446f8142\" returns successfully"
Apr 30 00:47:09.860445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06d16203f0decee5694c8dd6b8850b593771780f48597c6c436f026b7db894ef-rootfs.mount: Deactivated successfully.
Apr 30 00:47:09.875028 containerd[2141]: time="2025-04-30T00:47:09.874739019Z" level=info msg="shim disconnected" id=06d16203f0decee5694c8dd6b8850b593771780f48597c6c436f026b7db894ef namespace=k8s.io
Apr 30 00:47:09.875028 containerd[2141]: time="2025-04-30T00:47:09.874819275Z" level=warning msg="cleaning up after shim disconnected" id=06d16203f0decee5694c8dd6b8850b593771780f48597c6c436f026b7db894ef namespace=k8s.io
Apr 30 00:47:09.875028 containerd[2141]: time="2025-04-30T00:47:09.874842459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:47:09.897241 containerd[2141]: time="2025-04-30T00:47:09.897038211Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:47:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 00:47:10.042298 kubelet[3658]: I0430 00:47:10.042259 3658 scope.go:117] "RemoveContainer" containerID="06d16203f0decee5694c8dd6b8850b593771780f48597c6c436f026b7db894ef"
Apr 30 00:47:10.047434 containerd[2141]: time="2025-04-30T00:47:10.046784663Z" level=info msg="CreateContainer within sandbox \"1be3abfb5cf9c04424c6d38c0d065b4669cb3b37494e2c78c6448c7835612340\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 30 00:47:10.075439 containerd[2141]: time="2025-04-30T00:47:10.075344568Z" level=info msg="CreateContainer within sandbox \"1be3abfb5cf9c04424c6d38c0d065b4669cb3b37494e2c78c6448c7835612340\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e7a26529ef093c025075cce067fa8951bce73882fe5cc364aa8e02f5917c44b6\""
Apr 30 00:47:10.076714 containerd[2141]: time="2025-04-30T00:47:10.076425648Z" level=info msg="StartContainer for \"e7a26529ef093c025075cce067fa8951bce73882fe5cc364aa8e02f5917c44b6\""
Apr 30 00:47:10.192832 containerd[2141]: time="2025-04-30T00:47:10.192724224Z" level=info msg="StartContainer for \"e7a26529ef093c025075cce067fa8951bce73882fe5cc364aa8e02f5917c44b6\" returns successfully"
Apr 30 00:47:13.052119 kubelet[3658]: E0430 00:47:13.051787 3658 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.157:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-157?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"