Mar 19 11:33:05.159058 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 19 11:33:05.159102 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Mar 19 10:15:40 -00 2025
Mar 19 11:33:05.159127 kernel: KASLR disabled due to lack of seed
Mar 19 11:33:05.159143 kernel: efi: EFI v2.7 by EDK II
Mar 19 11:33:05.159158 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Mar 19 11:33:05.159173 kernel: secureboot: Secure boot disabled
Mar 19 11:33:05.159190 kernel: ACPI: Early table checksum verification disabled
Mar 19 11:33:05.159205 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 19 11:33:05.159220 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 19 11:33:05.159235 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 19 11:33:05.159255 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Mar 19 11:33:05.159270 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 19 11:33:05.159285 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 19 11:33:05.159359 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 19 11:33:05.159384 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 19 11:33:05.159406 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 19 11:33:05.159423 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 19 11:33:05.159439 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 19 11:33:05.159455 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 19 11:33:05.159471 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 19 11:33:05.159487 kernel: printk: bootconsole [uart0] enabled
Mar 19 11:33:05.159503 kernel: NUMA: Failed to initialise from firmware
Mar 19 11:33:05.159519 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 19 11:33:05.159535 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 19 11:33:05.159551 kernel: Zone ranges:
Mar 19 11:33:05.159567 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 19 11:33:05.159587 kernel: DMA32 empty
Mar 19 11:33:05.159603 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 19 11:33:05.159619 kernel: Movable zone start for each node
Mar 19 11:33:05.159634 kernel: Early memory node ranges
Mar 19 11:33:05.159650 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 19 11:33:05.159666 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 19 11:33:05.159681 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 19 11:33:05.159697 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 19 11:33:05.159713 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 19 11:33:05.159728 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 19 11:33:05.159744 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 19 11:33:05.159760 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 19 11:33:05.159780 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 19 11:33:05.159797 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 19 11:33:05.159820 kernel: psci: probing for conduit method from ACPI.
Mar 19 11:33:05.159837 kernel: psci: PSCIv1.0 detected in firmware.
Mar 19 11:33:05.159854 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 19 11:33:05.159875 kernel: psci: Trusted OS migration not required
Mar 19 11:33:05.159891 kernel: psci: SMC Calling Convention v1.1
Mar 19 11:33:05.159908 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 19 11:33:05.159924 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 19 11:33:05.159941 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 19 11:33:05.159957 kernel: Detected PIPT I-cache on CPU0
Mar 19 11:33:05.159974 kernel: CPU features: detected: GIC system register CPU interface
Mar 19 11:33:05.159990 kernel: CPU features: detected: Spectre-v2
Mar 19 11:33:05.160007 kernel: CPU features: detected: Spectre-v3a
Mar 19 11:33:05.160023 kernel: CPU features: detected: Spectre-BHB
Mar 19 11:33:05.160039 kernel: CPU features: detected: ARM erratum 1742098
Mar 19 11:33:05.160055 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 19 11:33:05.160077 kernel: alternatives: applying boot alternatives
Mar 19 11:33:05.160095 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:33:05.160113 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 19 11:33:05.160130 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 19 11:33:05.160147 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 19 11:33:05.160163 kernel: Fallback order for Node 0: 0
Mar 19 11:33:05.160180 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 19 11:33:05.160196 kernel: Policy zone: Normal
Mar 19 11:33:05.160212 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 19 11:33:05.160229 kernel: software IO TLB: area num 2.
Mar 19 11:33:05.160250 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 19 11:33:05.160267 kernel: Memory: 3821240K/4030464K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 209224K reserved, 0K cma-reserved)
Mar 19 11:33:05.160284 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 19 11:33:05.160316 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 19 11:33:05.160362 kernel: rcu: RCU event tracing is enabled.
Mar 19 11:33:05.160381 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 19 11:33:05.160398 kernel: Trampoline variant of Tasks RCU enabled.
Mar 19 11:33:05.160415 kernel: Tracing variant of Tasks RCU enabled.
Mar 19 11:33:05.160432 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 19 11:33:05.160449 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 19 11:33:05.160466 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 19 11:33:05.160490 kernel: GICv3: 96 SPIs implemented
Mar 19 11:33:05.160507 kernel: GICv3: 0 Extended SPIs implemented
Mar 19 11:33:05.160523 kernel: Root IRQ handler: gic_handle_irq
Mar 19 11:33:05.160540 kernel: GICv3: GICv3 features: 16 PPIs
Mar 19 11:33:05.160556 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 19 11:33:05.160574 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 19 11:33:05.160590 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 19 11:33:05.160608 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 19 11:33:05.160624 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 19 11:33:05.160641 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 19 11:33:05.160658 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 19 11:33:05.160674 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 19 11:33:05.160696 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 19 11:33:05.160714 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 19 11:33:05.160731 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 19 11:33:05.160748 kernel: Console: colour dummy device 80x25
Mar 19 11:33:05.160765 kernel: printk: console [tty1] enabled
Mar 19 11:33:05.160782 kernel: ACPI: Core revision 20230628
Mar 19 11:33:05.160799 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 19 11:33:05.160817 kernel: pid_max: default: 32768 minimum: 301
Mar 19 11:33:05.160834 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 19 11:33:05.160853 kernel: landlock: Up and running.
Mar 19 11:33:05.160875 kernel: SELinux: Initializing.
Mar 19 11:33:05.160892 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:33:05.160909 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 19 11:33:05.160927 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 19 11:33:05.160945 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 19 11:33:05.160962 kernel: rcu: Hierarchical SRCU implementation.
Mar 19 11:33:05.160979 kernel: rcu: Max phase no-delay instances is 400.
Mar 19 11:33:05.160996 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 19 11:33:05.161018 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 19 11:33:05.161036 kernel: Remapping and enabling EFI services.
Mar 19 11:33:05.161053 kernel: smp: Bringing up secondary CPUs ...
Mar 19 11:33:05.161070 kernel: Detected PIPT I-cache on CPU1
Mar 19 11:33:05.161087 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 19 11:33:05.161105 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 19 11:33:05.161122 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 19 11:33:05.161139 kernel: smp: Brought up 1 node, 2 CPUs
Mar 19 11:33:05.161156 kernel: SMP: Total of 2 processors activated.
Mar 19 11:33:05.161172 kernel: CPU features: detected: 32-bit EL0 Support
Mar 19 11:33:05.161194 kernel: CPU features: detected: 32-bit EL1 Support
Mar 19 11:33:05.161211 kernel: CPU features: detected: CRC32 instructions
Mar 19 11:33:05.161240 kernel: CPU: All CPU(s) started at EL1
Mar 19 11:33:05.161262 kernel: alternatives: applying system-wide alternatives
Mar 19 11:33:05.161279 kernel: devtmpfs: initialized
Mar 19 11:33:05.161297 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 19 11:33:05.161342 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 19 11:33:05.161360 kernel: pinctrl core: initialized pinctrl subsystem
Mar 19 11:33:05.161378 kernel: SMBIOS 3.0.0 present.
Mar 19 11:33:05.161402 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 19 11:33:05.161420 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 19 11:33:05.161438 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 19 11:33:05.161455 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 19 11:33:05.161473 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 19 11:33:05.161491 kernel: audit: initializing netlink subsys (disabled)
Mar 19 11:33:05.161509 kernel: audit: type=2000 audit(0.218:1): state=initialized audit_enabled=0 res=1
Mar 19 11:33:05.161531 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 19 11:33:05.161549 kernel: cpuidle: using governor menu
Mar 19 11:33:05.161567 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 19 11:33:05.161585 kernel: ASID allocator initialised with 65536 entries
Mar 19 11:33:05.161622 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 19 11:33:05.161641 kernel: Serial: AMBA PL011 UART driver
Mar 19 11:33:05.161659 kernel: Modules: 17760 pages in range for non-PLT usage
Mar 19 11:33:05.161676 kernel: Modules: 509280 pages in range for PLT usage
Mar 19 11:33:05.161694 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 19 11:33:05.161718 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 19 11:33:05.161736 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 19 11:33:05.161753 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 19 11:33:05.161771 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 19 11:33:05.161789 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 19 11:33:05.161806 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 19 11:33:05.161824 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 19 11:33:05.161841 kernel: ACPI: Added _OSI(Module Device)
Mar 19 11:33:05.161859 kernel: ACPI: Added _OSI(Processor Device)
Mar 19 11:33:05.161881 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 19 11:33:05.161899 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 19 11:33:05.161916 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 19 11:33:05.161934 kernel: ACPI: Interpreter enabled
Mar 19 11:33:05.161951 kernel: ACPI: Using GIC for interrupt routing
Mar 19 11:33:05.161969 kernel: ACPI: MCFG table detected, 1 entries
Mar 19 11:33:05.161986 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Mar 19 11:33:05.162294 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 19 11:33:05.168630 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 19 11:33:05.168838 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 19 11:33:05.169043 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Mar 19 11:33:05.169249 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Mar 19 11:33:05.169273 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 19 11:33:05.169292 kernel: acpiphp: Slot [1] registered
Mar 19 11:33:05.169369 kernel: acpiphp: Slot [2] registered
Mar 19 11:33:05.169389 kernel: acpiphp: Slot [3] registered
Mar 19 11:33:05.169417 kernel: acpiphp: Slot [4] registered
Mar 19 11:33:05.169435 kernel: acpiphp: Slot [5] registered
Mar 19 11:33:05.169453 kernel: acpiphp: Slot [6] registered
Mar 19 11:33:05.169470 kernel: acpiphp: Slot [7] registered
Mar 19 11:33:05.169487 kernel: acpiphp: Slot [8] registered
Mar 19 11:33:05.169505 kernel: acpiphp: Slot [9] registered
Mar 19 11:33:05.169522 kernel: acpiphp: Slot [10] registered
Mar 19 11:33:05.169540 kernel: acpiphp: Slot [11] registered
Mar 19 11:33:05.169557 kernel: acpiphp: Slot [12] registered
Mar 19 11:33:05.169575 kernel: acpiphp: Slot [13] registered
Mar 19 11:33:05.169614 kernel: acpiphp: Slot [14] registered
Mar 19 11:33:05.169634 kernel: acpiphp: Slot [15] registered
Mar 19 11:33:05.169651 kernel: acpiphp: Slot [16] registered
Mar 19 11:33:05.169669 kernel: acpiphp: Slot [17] registered
Mar 19 11:33:05.169686 kernel: acpiphp: Slot [18] registered
Mar 19 11:33:05.169704 kernel: acpiphp: Slot [19] registered
Mar 19 11:33:05.169721 kernel: acpiphp: Slot [20] registered
Mar 19 11:33:05.169739 kernel: acpiphp: Slot [21] registered
Mar 19 11:33:05.169756 kernel: acpiphp: Slot [22] registered
Mar 19 11:33:05.169780 kernel: acpiphp: Slot [23] registered
Mar 19 11:33:05.169798 kernel: acpiphp: Slot [24] registered
Mar 19 11:33:05.169815 kernel: acpiphp: Slot [25] registered
Mar 19 11:33:05.169833 kernel: acpiphp: Slot [26] registered
Mar 19 11:33:05.169850 kernel: acpiphp: Slot [27] registered
Mar 19 11:33:05.169867 kernel: acpiphp: Slot [28] registered
Mar 19 11:33:05.169885 kernel: acpiphp: Slot [29] registered
Mar 19 11:33:05.169902 kernel: acpiphp: Slot [30] registered
Mar 19 11:33:05.169920 kernel: acpiphp: Slot [31] registered
Mar 19 11:33:05.169937 kernel: PCI host bridge to bus 0000:00
Mar 19 11:33:05.170157 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 19 11:33:05.170419 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 19 11:33:05.170614 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 19 11:33:05.170811 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Mar 19 11:33:05.171059 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 19 11:33:05.171319 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 19 11:33:05.171551 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 19 11:33:05.171840 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 19 11:33:05.172066 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 19 11:33:05.176602 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 19 11:33:05.176844 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 19 11:33:05.177047 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 19 11:33:05.177250 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 19 11:33:05.177509 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 19 11:33:05.177747 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 19 11:33:05.177960 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Mar 19 11:33:05.178174 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Mar 19 11:33:05.181532 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Mar 19 11:33:05.181806 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Mar 19 11:33:05.182036 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Mar 19 11:33:05.182249 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 19 11:33:05.182474 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 19 11:33:05.182671 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 19 11:33:05.182699 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 19 11:33:05.182719 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 19 11:33:05.182743 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 19 11:33:05.182786 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 19 11:33:05.182833 kernel: iommu: Default domain type: Translated
Mar 19 11:33:05.182892 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 19 11:33:05.182916 kernel: efivars: Registered efivars operations
Mar 19 11:33:05.182938 kernel: vgaarb: loaded
Mar 19 11:33:05.182958 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 19 11:33:05.182977 kernel: VFS: Disk quotas dquot_6.6.0
Mar 19 11:33:05.182995 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 19 11:33:05.183012 kernel: pnp: PnP ACPI init
Mar 19 11:33:05.183620 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 19 11:33:05.183675 kernel: pnp: PnP ACPI: found 1 devices
Mar 19 11:33:05.183696 kernel: NET: Registered PF_INET protocol family
Mar 19 11:33:05.183716 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 19 11:33:05.183735 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 19 11:33:05.183756 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 19 11:33:05.183774 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 19 11:33:05.183793 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 19 11:33:05.183812 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 19 11:33:05.183831 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:33:05.183862 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 19 11:33:05.183882 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 19 11:33:05.183900 kernel: PCI: CLS 0 bytes, default 64
Mar 19 11:33:05.183919 kernel: kvm [1]: HYP mode not available
Mar 19 11:33:05.183937 kernel: Initialise system trusted keyrings
Mar 19 11:33:05.183955 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 19 11:33:05.183973 kernel: Key type asymmetric registered
Mar 19 11:33:05.183991 kernel: Asymmetric key parser 'x509' registered
Mar 19 11:33:05.184009 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 19 11:33:05.184032 kernel: io scheduler mq-deadline registered
Mar 19 11:33:05.184050 kernel: io scheduler kyber registered
Mar 19 11:33:05.184068 kernel: io scheduler bfq registered
Mar 19 11:33:05.185479 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 19 11:33:05.185527 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 19 11:33:05.185547 kernel: ACPI: button: Power Button [PWRB]
Mar 19 11:33:05.185566 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 19 11:33:05.185585 kernel: ACPI: button: Sleep Button [SLPB]
Mar 19 11:33:05.185634 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 19 11:33:05.185655 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 19 11:33:05.185899 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 19 11:33:05.185926 kernel: printk: console [ttyS0] disabled
Mar 19 11:33:05.185945 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 19 11:33:05.185963 kernel: printk: console [ttyS0] enabled
Mar 19 11:33:05.185981 kernel: printk: bootconsole [uart0] disabled
Mar 19 11:33:05.185999 kernel: thunder_xcv, ver 1.0
Mar 19 11:33:05.186016 kernel: thunder_bgx, ver 1.0
Mar 19 11:33:05.186034 kernel: nicpf, ver 1.0
Mar 19 11:33:05.186058 kernel: nicvf, ver 1.0
Mar 19 11:33:05.186271 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 19 11:33:05.187150 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-19T11:33:04 UTC (1742383984)
Mar 19 11:33:05.187184 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 19 11:33:05.187203 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 19 11:33:05.187222 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 19 11:33:05.187240 kernel: watchdog: Hard watchdog permanently disabled
Mar 19 11:33:05.187265 kernel: NET: Registered PF_INET6 protocol family
Mar 19 11:33:05.187283 kernel: Segment Routing with IPv6
Mar 19 11:33:05.187336 kernel: In-situ OAM (IOAM) with IPv6
Mar 19 11:33:05.187360 kernel: NET: Registered PF_PACKET protocol family
Mar 19 11:33:05.187378 kernel: Key type dns_resolver registered
Mar 19 11:33:05.187396 kernel: registered taskstats version 1
Mar 19 11:33:05.187413 kernel: Loading compiled-in X.509 certificates
Mar 19 11:33:05.187432 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 36392d496708ee63c4af5364493015d5256162ff'
Mar 19 11:33:05.187450 kernel: Key type .fscrypt registered
Mar 19 11:33:05.187468 kernel: Key type fscrypt-provisioning registered
Mar 19 11:33:05.187492 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 19 11:33:05.187510 kernel: ima: Allocated hash algorithm: sha1
Mar 19 11:33:05.187528 kernel: ima: No architecture policies found
Mar 19 11:33:05.187546 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 19 11:33:05.187563 kernel: clk: Disabling unused clocks
Mar 19 11:33:05.187581 kernel: Freeing unused kernel memory: 38336K
Mar 19 11:33:05.187598 kernel: Run /init as init process
Mar 19 11:33:05.187616 kernel: with arguments:
Mar 19 11:33:05.187633 kernel: /init
Mar 19 11:33:05.187656 kernel: with environment:
Mar 19 11:33:05.187673 kernel: HOME=/
Mar 19 11:33:05.187691 kernel: TERM=linux
Mar 19 11:33:05.187709 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 19 11:33:05.187729 systemd[1]: Successfully made /usr/ read-only.
Mar 19 11:33:05.187753 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:33:05.187774 systemd[1]: Detected virtualization amazon.
Mar 19 11:33:05.187798 systemd[1]: Detected architecture arm64.
Mar 19 11:33:05.187817 systemd[1]: Running in initrd.
Mar 19 11:33:05.187836 systemd[1]: No hostname configured, using default hostname.
Mar 19 11:33:05.187856 systemd[1]: Hostname set to .
Mar 19 11:33:05.188113 systemd[1]: Initializing machine ID from VM UUID.
Mar 19 11:33:05.188134 systemd[1]: Queued start job for default target initrd.target.
Mar 19 11:33:05.188154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:33:05.188174 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:33:05.188195 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 19 11:33:05.188223 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:33:05.188243 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 19 11:33:05.188264 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 19 11:33:05.188286 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 19 11:33:05.188337 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 19 11:33:05.188359 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:33:05.188385 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 19 11:33:05.188405 systemd[1]: Reached target paths.target - Path Units.
Mar 19 11:33:05.188424 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:33:05.188444 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:33:05.188463 systemd[1]: Reached target timers.target - Timer Units.
Mar 19 11:33:05.188483 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 11:33:05.188502 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 11:33:05.188522 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 19 11:33:05.188541 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 19 11:33:05.188566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:33:05.188587 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:33:05.188607 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:33:05.188626 systemd[1]: Reached target sockets.target - Socket Units.
Mar 19 11:33:05.188645 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 19 11:33:05.188665 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:33:05.188684 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 19 11:33:05.188703 systemd[1]: Starting systemd-fsck-usr.service...
Mar 19 11:33:05.188727 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:33:05.188747 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:33:05.188767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:33:05.188830 systemd-journald[252]: Collecting audit messages is disabled.
Mar 19 11:33:05.188879 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 19 11:33:05.188899 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:33:05.188920 systemd[1]: Finished systemd-fsck-usr.service.
Mar 19 11:33:05.188940 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 19 11:33:05.188959 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 19 11:33:05.188983 systemd-journald[252]: Journal started
Mar 19 11:33:05.189020 systemd-journald[252]: Runtime Journal (/run/log/journal/ec254cc7234c4897d23830e40c2015af) is 8M, max 75.3M, 67.3M free.
Mar 19 11:33:05.152417 systemd-modules-load[253]: Inserted module 'overlay'
Mar 19 11:33:05.206658 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:33:05.206702 kernel: Bridge firewalling registered
Mar 19 11:33:05.200343 systemd-modules-load[253]: Inserted module 'br_netfilter'
Mar 19 11:33:05.212663 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:33:05.219392 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:33:05.244786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:33:05.250073 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:33:05.255665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 11:33:05.262839 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:33:05.275993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:33:05.294759 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 19 11:33:05.299786 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:33:05.301568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:33:05.340400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:33:05.358158 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:33:05.382692 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 19 11:33:05.399474 systemd-resolved[278]: Positive Trust Anchors:
Mar 19 11:33:05.399508 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 19 11:33:05.399570 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 19 11:33:05.443708 dracut-cmdline[291]: dracut-dracut-053
Mar 19 11:33:05.450901 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=41cc5bdd62754423bbb4bbec0e0356d6a1ab5d0ac0f2396a30318f9fb189e7eb
Mar 19 11:33:05.602343 kernel: SCSI subsystem initialized
Mar 19 11:33:05.610345 kernel: Loading iSCSI transport class v2.0-870.
Mar 19 11:33:05.622344 kernel: iscsi: registered transport (tcp)
Mar 19 11:33:05.645727 kernel: iscsi: registered transport (qla4xxx)
Mar 19 11:33:05.645804 kernel: QLogic iSCSI HBA Driver
Mar 19 11:33:05.675377 kernel: random: crng init done
Mar 19 11:33:05.675662 systemd-resolved[278]: Defaulting to hostname 'linux'.
Mar 19 11:33:05.679550 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 19 11:33:05.689887 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 19 11:33:05.741355 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:33:05.753642 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 19 11:33:05.784498 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 19 11:33:05.784588 kernel: device-mapper: uevent: version 1.0.3 Mar 19 11:33:05.786237 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 19 11:33:05.851353 kernel: raid6: neonx8 gen() 6609 MB/s Mar 19 11:33:05.868338 kernel: raid6: neonx4 gen() 6578 MB/s Mar 19 11:33:05.885336 kernel: raid6: neonx2 gen() 5458 MB/s Mar 19 11:33:05.902343 kernel: raid6: neonx1 gen() 3959 MB/s Mar 19 11:33:05.919415 kernel: raid6: int64x8 gen() 3622 MB/s Mar 19 11:33:05.936365 kernel: raid6: int64x4 gen() 3552 MB/s Mar 19 11:33:05.953336 kernel: raid6: int64x2 gen() 3616 MB/s Mar 19 11:33:05.971087 kernel: raid6: int64x1 gen() 2765 MB/s Mar 19 11:33:05.971120 kernel: raid6: using algorithm neonx8 gen() 6609 MB/s Mar 19 11:33:05.989077 kernel: raid6: .... xor() 4690 MB/s, rmw enabled Mar 19 11:33:05.989116 kernel: raid6: using neon recovery algorithm Mar 19 11:33:05.996340 kernel: xor: measuring software checksum speed Mar 19 11:33:05.996403 kernel: 8regs : 11971 MB/sec Mar 19 11:33:05.998334 kernel: 32regs : 11865 MB/sec Mar 19 11:33:06.000337 kernel: arm64_neon : 8974 MB/sec Mar 19 11:33:06.000370 kernel: xor: using function: 8regs (11971 MB/sec) Mar 19 11:33:06.082388 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 19 11:33:06.101387 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:33:06.115603 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:33:06.157996 systemd-udevd[473]: Using default interface naming scheme 'v255'. Mar 19 11:33:06.167557 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Mar 19 11:33:06.182628 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 19 11:33:06.212424 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Mar 19 11:33:06.266980 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:33:06.278754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:33:06.403176 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:33:06.422727 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 19 11:33:06.460043 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 11:33:06.464838 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:33:06.478735 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:33:06.485201 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:33:06.496630 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 11:33:06.539430 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:33:06.625043 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 19 11:33:06.625112 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Mar 19 11:33:06.647281 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 19 11:33:06.647570 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 19 11:33:06.647806 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:69:68:8b:93:65 Mar 19 11:33:06.652786 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 19 11:33:06.650391 (udev-worker)[520]: Network interface NamePolicy= disabled on kernel command line. Mar 19 11:33:06.661225 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 19 11:33:06.652387 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 19 11:33:06.652654 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:33:06.672643 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 19 11:33:06.673816 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:33:06.682588 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 19 11:33:06.682627 kernel: GPT:9289727 != 16777215 Mar 19 11:33:06.682651 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 19 11:33:06.683506 kernel: GPT:9289727 != 16777215 Mar 19 11:33:06.685177 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 19 11:33:06.686677 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 19 11:33:06.689134 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:33:06.689503 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:33:06.696937 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:33:06.711862 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:33:06.720796 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:33:06.744417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:33:06.761683 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:33:06.806456 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:33:06.843346 kernel: BTRFS: device fsid 7c80927c-98c3-4e81-a933-b7f5e1234bd2 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (528) Mar 19 11:33:06.860357 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (530) Mar 19 11:33:06.963451 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. 
Mar 19 11:33:06.991119 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Mar 19 11:33:07.032450 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Mar 19 11:33:07.035867 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 19 11:33:07.078749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 19 11:33:07.095599 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 11:33:07.110791 disk-uuid[663]: Primary Header is updated. Mar 19 11:33:07.110791 disk-uuid[663]: Secondary Entries is updated. Mar 19 11:33:07.110791 disk-uuid[663]: Secondary Header is updated. Mar 19 11:33:07.124350 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 19 11:33:07.135487 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 19 11:33:08.144778 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 19 11:33:08.147775 disk-uuid[664]: The operation has completed successfully. Mar 19 11:33:08.339721 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:33:08.339921 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 11:33:08.456598 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 19 11:33:08.467813 sh[922]: Success Mar 19 11:33:08.486648 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 19 11:33:08.594879 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:33:08.607513 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:33:08.615623 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 19 11:33:08.650281 kernel: BTRFS info (device dm-0): first mount of filesystem 7c80927c-98c3-4e81-a933-b7f5e1234bd2 Mar 19 11:33:08.650362 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:33:08.650389 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:33:08.653249 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:33:08.653297 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:33:08.754352 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 19 11:33:08.766838 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:33:08.767391 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 19 11:33:08.780716 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:33:08.783650 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 19 11:33:08.827342 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:33:08.830583 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:33:08.830670 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 19 11:33:08.838634 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 19 11:33:08.860252 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 19 11:33:08.865519 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:33:08.879384 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 19 11:33:08.894883 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:33:09.005142 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 19 11:33:09.028648 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:33:09.081409 systemd-networkd[1116]: lo: Link UP Mar 19 11:33:09.081431 systemd-networkd[1116]: lo: Gained carrier Mar 19 11:33:09.084241 systemd-networkd[1116]: Enumeration completed Mar 19 11:33:09.084945 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:33:09.084953 systemd-networkd[1116]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:33:09.085859 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:33:09.091962 systemd[1]: Reached target network.target - Network. Mar 19 11:33:09.095618 systemd-networkd[1116]: eth0: Link UP Mar 19 11:33:09.095625 systemd-networkd[1116]: eth0: Gained carrier Mar 19 11:33:09.095643 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:33:09.127464 systemd-networkd[1116]: eth0: DHCPv4 address 172.31.16.168/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 19 11:33:09.242392 ignition[1034]: Ignition 2.20.0 Mar 19 11:33:09.242448 ignition[1034]: Stage: fetch-offline Mar 19 11:33:09.246349 ignition[1034]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:09.246393 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:09.249108 ignition[1034]: Ignition finished successfully Mar 19 11:33:09.254389 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:33:09.269704 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 19 11:33:09.294347 ignition[1125]: Ignition 2.20.0 Mar 19 11:33:09.294378 ignition[1125]: Stage: fetch Mar 19 11:33:09.295359 ignition[1125]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:09.295628 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:09.295839 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:09.320910 ignition[1125]: PUT result: OK Mar 19 11:33:09.324452 ignition[1125]: parsed url from cmdline: "" Mar 19 11:33:09.324472 ignition[1125]: no config URL provided Mar 19 11:33:09.324490 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:33:09.324519 ignition[1125]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:33:09.324555 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:09.333959 ignition[1125]: PUT result: OK Mar 19 11:33:09.334070 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 19 11:33:09.336062 ignition[1125]: GET result: OK Mar 19 11:33:09.336589 ignition[1125]: parsing config with SHA512: 05a5bbff07e720b15254252e5b655a4830fb2419e77451f5b7d8fd50d3b8aea4ab7087aeff50c0ccf638daefd853741ed8d966eea81975d4ef0d5a8fbb0fdf1a Mar 19 11:33:09.349153 unknown[1125]: fetched base config from "system" Mar 19 11:33:09.349175 unknown[1125]: fetched base config from "system" Mar 19 11:33:09.349189 unknown[1125]: fetched user config from "aws" Mar 19 11:33:09.353527 ignition[1125]: fetch: fetch complete Mar 19 11:33:09.353551 ignition[1125]: fetch: fetch passed Mar 19 11:33:09.353690 ignition[1125]: Ignition finished successfully Mar 19 11:33:09.364572 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 19 11:33:09.383718 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 19 11:33:09.412289 ignition[1132]: Ignition 2.20.0 Mar 19 11:33:09.413122 ignition[1132]: Stage: kargs Mar 19 11:33:09.413909 ignition[1132]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:09.413937 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:09.414112 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:09.418048 ignition[1132]: PUT result: OK Mar 19 11:33:09.431530 ignition[1132]: kargs: kargs passed Mar 19 11:33:09.431640 ignition[1132]: Ignition finished successfully Mar 19 11:33:09.437281 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 19 11:33:09.457765 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 19 11:33:09.485865 ignition[1139]: Ignition 2.20.0 Mar 19 11:33:09.485887 ignition[1139]: Stage: disks Mar 19 11:33:09.487131 ignition[1139]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:09.487161 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:09.487388 ignition[1139]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:09.491121 ignition[1139]: PUT result: OK Mar 19 11:33:09.504175 ignition[1139]: disks: disks passed Mar 19 11:33:09.504746 ignition[1139]: Ignition finished successfully Mar 19 11:33:09.509566 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 19 11:33:09.517564 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 19 11:33:09.520792 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:33:09.526203 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:33:09.528568 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:33:09.531056 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:33:09.551492 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 19 11:33:09.599417 systemd-fsck[1148]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 19 11:33:09.604671 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 19 11:33:09.656528 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 19 11:33:09.751497 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 45bb9a4a-80dc-4ce4-9ca9-c4944d8ff0e6 r/w with ordered data mode. Quota mode: none. Mar 19 11:33:09.752615 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:33:09.758682 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:33:09.790564 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:33:09.800790 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:33:09.804702 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 19 11:33:09.804798 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:33:09.828915 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1167) Mar 19 11:33:09.804848 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:33:09.835927 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:33:09.835966 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:33:09.835992 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 19 11:33:09.842449 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:33:09.852983 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 19 11:33:09.854936 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 19 11:33:09.868534 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 19 11:33:10.200922 initrd-setup-root[1191]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:33:10.220594 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:33:10.231590 initrd-setup-root[1205]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:33:10.243115 initrd-setup-root[1212]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:33:10.586773 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 19 11:33:10.599506 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:33:10.608577 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:33:10.631343 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:33:10.647089 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 19 11:33:10.671656 ignition[1280]: INFO : Ignition 2.20.0 Mar 19 11:33:10.676809 ignition[1280]: INFO : Stage: mount Mar 19 11:33:10.676809 ignition[1280]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:10.676809 ignition[1280]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:10.676809 ignition[1280]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:10.674513 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 19 11:33:10.692008 ignition[1280]: INFO : PUT result: OK Mar 19 11:33:10.696433 ignition[1280]: INFO : mount: mount passed Mar 19 11:33:10.698351 ignition[1280]: INFO : Ignition finished successfully Mar 19 11:33:10.702576 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:33:10.713502 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:33:10.727439 systemd-networkd[1116]: eth0: Gained IPv6LL Mar 19 11:33:10.738763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 19 11:33:10.774339 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1291) Mar 19 11:33:10.778675 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eeeb6da4-27b1-474b-8015-c667e85f7b18 Mar 19 11:33:10.778724 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 19 11:33:10.778751 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 19 11:33:10.785343 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 19 11:33:10.788915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:33:10.823027 ignition[1308]: INFO : Ignition 2.20.0 Mar 19 11:33:10.823027 ignition[1308]: INFO : Stage: files Mar 19 11:33:10.828403 ignition[1308]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:10.828403 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:10.828403 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:10.828403 ignition[1308]: INFO : PUT result: OK Mar 19 11:33:10.845463 ignition[1308]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:33:10.850267 ignition[1308]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:33:10.850267 ignition[1308]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:33:10.883139 ignition[1308]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:33:10.887165 ignition[1308]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:33:10.891274 unknown[1308]: wrote ssh authorized keys file for user: core Mar 19 11:33:10.893743 ignition[1308]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:33:10.906381 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 19 11:33:10.906381 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Mar 19 11:33:11.025151 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 19 11:33:12.613635 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Mar 19 11:33:12.613635 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:33:12.622703 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 19 11:33:13.190006 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 19 11:33:13.416083 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 19 11:33:13.416083 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:33:13.424908 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Mar 19 11:33:13.849738 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 19 11:33:14.293478 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 19 11:33:14.293478 ignition[1308]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 19 11:33:14.316251 ignition[1308]: INFO : files: op(c): op(d): [started] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:33:14.321125 ignition[1308]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:33:14.321125 ignition[1308]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 19 11:33:14.321125 ignition[1308]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Mar 19 11:33:14.321125 ignition[1308]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Mar 19 11:33:14.321125 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:33:14.321125 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:33:14.321125 ignition[1308]: INFO : files: files passed Mar 19 11:33:14.321125 ignition[1308]: INFO : Ignition finished successfully Mar 19 11:33:14.352606 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:33:14.373806 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 19 11:33:14.381628 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:33:14.390774 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 19 11:33:14.393110 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 19 11:33:14.425635 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:33:14.425635 initrd-setup-root-after-ignition[1337]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:33:14.435621 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:33:14.441673 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:33:14.449101 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:33:14.459623 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:33:14.511333 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:33:14.513357 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 19 11:33:14.522166 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:33:14.524344 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 19 11:33:14.526475 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:33:14.528132 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 19 11:33:14.565459 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:33:14.581079 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:33:14.603003 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:33:14.609006 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:33:14.611785 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:33:14.613793 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Mar 19 11:33:14.614038 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:33:14.617051 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:33:14.630611 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:33:14.632899 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:33:14.635709 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:33:14.638654 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 19 11:33:14.641433 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:33:14.643980 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:33:14.647044 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:33:14.649551 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 19 11:33:14.652039 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:33:14.654047 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:33:14.654288 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:33:14.657134 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:33:14.679549 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:33:14.682662 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:33:14.696711 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:33:14.699776 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:33:14.700014 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:33:14.702886 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Mar 19 11:33:14.703111 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:33:14.706295 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:33:14.706537 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:33:14.733771 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:33:14.739289 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:33:14.743092 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:33:14.743423 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:33:14.746640 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:33:14.746880 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:33:14.790393 ignition[1361]: INFO : Ignition 2.20.0 Mar 19 11:33:14.790393 ignition[1361]: INFO : Stage: umount Mar 19 11:33:14.795398 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:33:14.795398 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 19 11:33:14.795398 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 19 11:33:14.803449 ignition[1361]: INFO : PUT result: OK Mar 19 11:33:14.804538 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 19 11:33:14.811593 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:33:14.811843 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 19 11:33:14.827233 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 19 11:33:14.843179 ignition[1361]: INFO : umount: umount passed Mar 19 11:33:14.843179 ignition[1361]: INFO : Ignition finished successfully Mar 19 11:33:14.827502 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 19 11:33:14.834852 systemd[1]: ignition-mount.service: Deactivated successfully. 
Mar 19 11:33:14.835043 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 19 11:33:14.840066 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 19 11:33:14.840250 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 19 11:33:14.843275 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 19 11:33:14.844661 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 19 11:33:14.847976 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 19 11:33:14.848076 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 19 11:33:14.851200 systemd[1]: Stopped target network.target - Network.
Mar 19 11:33:14.853463 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 19 11:33:14.853597 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 19 11:33:14.860408 systemd[1]: Stopped target paths.target - Path Units.
Mar 19 11:33:14.862481 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 19 11:33:14.890489 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:33:14.893181 systemd[1]: Stopped target slices.target - Slice Units.
Mar 19 11:33:14.895062 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 19 11:33:14.897068 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 19 11:33:14.897156 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 19 11:33:14.899238 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 19 11:33:14.899338 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 19 11:33:14.901466 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 19 11:33:14.901593 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 19 11:33:14.903754 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 19 11:33:14.903872 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 19 11:33:14.906571 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 19 11:33:14.906694 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 19 11:33:14.909851 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 19 11:33:14.912773 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 19 11:33:14.929600 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 19 11:33:14.930223 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 19 11:33:14.975876 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 19 11:33:14.976691 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 19 11:33:14.977359 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 19 11:33:15.009214 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 19 11:33:15.011968 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 19 11:33:15.012750 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:33:15.029781 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 19 11:33:15.033367 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 19 11:33:15.033981 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 19 11:33:15.045519 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 19 11:33:15.045667 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:33:15.053699 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 19 11:33:15.053819 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:33:15.056240 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 19 11:33:15.056392 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:33:15.069951 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:33:15.077439 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 19 11:33:15.077910 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 19 11:33:15.096000 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 19 11:33:15.096279 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:33:15.102802 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 19 11:33:15.102951 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:33:15.110109 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 19 11:33:15.110193 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:33:15.112902 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 19 11:33:15.113001 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 19 11:33:15.116116 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 19 11:33:15.116209 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 19 11:33:15.137149 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 19 11:33:15.137267 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 19 11:33:15.155736 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 19 11:33:15.160375 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 19 11:33:15.160513 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:33:15.167948 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 19 11:33:15.168059 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 11:33:15.181109 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 19 11:33:15.181220 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:33:15.184231 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 19 11:33:15.184357 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 19 11:33:15.209070 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 19 11:33:15.209211 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 19 11:33:15.210051 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 19 11:33:15.210485 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 19 11:33:15.225067 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 19 11:33:15.225260 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 19 11:33:15.230862 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 19 11:33:15.253703 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 19 11:33:15.275447 systemd[1]: Switching root.
Mar 19 11:33:15.327481 systemd-journald[252]: Journal stopped
Mar 19 11:33:18.059394 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Mar 19 11:33:18.059533 kernel: SELinux: policy capability network_peer_controls=1
Mar 19 11:33:18.059582 kernel: SELinux: policy capability open_perms=1
Mar 19 11:33:18.059625 kernel: SELinux: policy capability extended_socket_class=1
Mar 19 11:33:18.059663 kernel: SELinux: policy capability always_check_network=0
Mar 19 11:33:18.059695 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 19 11:33:18.059724 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 19 11:33:18.059752 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 19 11:33:18.059779 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 19 11:33:18.059818 kernel: audit: type=1403 audit(1742383995.786:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 19 11:33:18.059852 systemd[1]: Successfully loaded SELinux policy in 92.691ms.
Mar 19 11:33:18.059904 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.876ms.
Mar 19 11:33:18.059942 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 19 11:33:18.059978 systemd[1]: Detected virtualization amazon.
Mar 19 11:33:18.060008 systemd[1]: Detected architecture arm64.
Mar 19 11:33:18.060038 systemd[1]: Detected first boot.
Mar 19 11:33:18.060069 systemd[1]: Initializing machine ID from VM UUID.
Mar 19 11:33:18.060098 zram_generator::config[1406]: No configuration found.
Mar 19 11:33:18.060132 kernel: NET: Registered PF_VSOCK protocol family
Mar 19 11:33:18.060164 systemd[1]: Populated /etc with preset unit settings.
Mar 19 11:33:18.060207 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 19 11:33:18.060245 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 19 11:33:18.060276 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 19 11:33:18.062387 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 19 11:33:18.062435 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 19 11:33:18.062468 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 19 11:33:18.062499 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 19 11:33:18.062542 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 19 11:33:18.062574 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 19 11:33:18.062606 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 19 11:33:18.062646 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 19 11:33:18.062676 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 19 11:33:18.062705 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 19 11:33:18.062737 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 19 11:33:18.062767 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 19 11:33:18.062796 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 19 11:33:18.062827 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 19 11:33:18.062857 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 19 11:33:18.062893 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 19 11:33:18.062922 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 19 11:33:18.062952 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 19 11:33:18.062982 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 19 11:33:18.063015 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 19 11:33:18.063047 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 19 11:33:18.063076 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 19 11:33:18.063106 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 19 11:33:18.063141 systemd[1]: Reached target slices.target - Slice Units.
Mar 19 11:33:18.063172 systemd[1]: Reached target swap.target - Swaps.
Mar 19 11:33:18.063201 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 19 11:33:18.063233 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 19 11:33:18.063262 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 19 11:33:18.063291 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 19 11:33:18.063353 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 19 11:33:18.063386 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 19 11:33:18.063417 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 19 11:33:18.063454 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 19 11:33:18.063486 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 19 11:33:18.063515 systemd[1]: Mounting media.mount - External Media Directory...
Mar 19 11:33:18.063543 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 19 11:33:18.063575 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 19 11:33:18.063604 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 19 11:33:18.063636 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 19 11:33:18.063668 systemd[1]: Reached target machines.target - Containers.
Mar 19 11:33:18.063697 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 19 11:33:18.063732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:33:18.063761 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 19 11:33:18.063789 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 19 11:33:18.063818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:33:18.063848 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 11:33:18.063878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:33:18.063906 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 19 11:33:18.063934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:33:18.063969 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 19 11:33:18.064002 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 19 11:33:18.064034 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 19 11:33:18.064065 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 19 11:33:18.064094 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 19 11:33:18.064129 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:33:18.064159 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 19 11:33:18.064187 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 19 11:33:18.064216 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 19 11:33:18.064253 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 19 11:33:18.064284 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 19 11:33:18.068447 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 19 11:33:18.068508 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 19 11:33:18.068542 systemd[1]: Stopped verity-setup.service.
Mar 19 11:33:18.068571 kernel: loop: module loaded
Mar 19 11:33:18.068604 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 19 11:33:18.068635 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 19 11:33:18.068676 systemd[1]: Mounted media.mount - External Media Directory.
Mar 19 11:33:18.068704 kernel: fuse: init (API version 7.39)
Mar 19 11:33:18.068734 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 19 11:33:18.068764 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 19 11:33:18.068798 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 19 11:33:18.068829 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 19 11:33:18.068858 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 19 11:33:18.068886 kernel: ACPI: bus type drm_connector registered
Mar 19 11:33:18.068914 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 19 11:33:18.068943 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 19 11:33:18.068973 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:33:18.069009 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:33:18.069038 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 11:33:18.069069 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 11:33:18.069100 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:33:18.069130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:33:18.069162 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 19 11:33:18.069191 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 19 11:33:18.069290 systemd-journald[1500]: Collecting audit messages is disabled.
Mar 19 11:33:18.069390 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:33:18.069424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:33:18.069452 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 19 11:33:18.069484 systemd-journald[1500]: Journal started
Mar 19 11:33:18.069534 systemd-journald[1500]: Runtime Journal (/run/log/journal/ec254cc7234c4897d23830e40c2015af) is 8M, max 75.3M, 67.3M free.
Mar 19 11:33:17.375783 systemd[1]: Queued start job for default target multi-user.target.
Mar 19 11:33:18.074487 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 19 11:33:17.391680 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 19 11:33:17.392677 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 19 11:33:18.079271 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 19 11:33:18.086677 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 19 11:33:18.093294 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 19 11:33:18.129226 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 19 11:33:18.142652 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 19 11:33:18.157117 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 19 11:33:18.165857 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 19 11:33:18.165937 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 19 11:33:18.174500 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 19 11:33:18.191646 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 19 11:33:18.207127 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 19 11:33:18.211930 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:33:18.228766 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 19 11:33:18.236154 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 19 11:33:18.239959 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 11:33:18.244994 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 19 11:33:18.249909 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 11:33:18.262968 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 19 11:33:18.281248 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 19 11:33:18.292804 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 19 11:33:18.304880 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 19 11:33:18.310433 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 19 11:33:18.315890 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 19 11:33:18.322162 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 19 11:33:18.348783 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 19 11:33:18.358214 systemd-journald[1500]: Time spent on flushing to /var/log/journal/ec254cc7234c4897d23830e40c2015af is 150.031ms for 927 entries.
Mar 19 11:33:18.358214 systemd-journald[1500]: System Journal (/var/log/journal/ec254cc7234c4897d23830e40c2015af) is 8M, max 195.6M, 187.6M free.
Mar 19 11:33:18.548654 systemd-journald[1500]: Received client request to flush runtime journal.
Mar 19 11:33:18.548752 kernel: loop0: detected capacity change from 0 to 113512
Mar 19 11:33:18.548806 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 19 11:33:18.360064 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 19 11:33:18.362911 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 19 11:33:18.387230 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 19 11:33:18.428653 udevadm[1548]: systemd-udev-settle.service is deprecated.
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 19 11:33:18.446716 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:33:18.550004 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 19 11:33:18.553499 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 19 11:33:18.559801 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 19 11:33:18.587370 kernel: loop1: detected capacity change from 0 to 123192
Mar 19 11:33:18.589033 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Mar 19 11:33:18.589057 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Mar 19 11:33:18.604523 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 19 11:33:18.627599 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 19 11:33:18.708706 kernel: loop2: detected capacity change from 0 to 53784
Mar 19 11:33:18.725062 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 19 11:33:18.750549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 19 11:33:18.777722 kernel: loop3: detected capacity change from 0 to 201592
Mar 19 11:33:18.812599 systemd-tmpfiles[1566]: ACLs are not supported, ignoring.
Mar 19 11:33:18.813205 systemd-tmpfiles[1566]: ACLs are not supported, ignoring.
Mar 19 11:33:18.831603 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 19 11:33:19.082606 kernel: loop4: detected capacity change from 0 to 113512
Mar 19 11:33:19.104887 kernel: loop5: detected capacity change from 0 to 123192
Mar 19 11:33:19.137529 kernel: loop6: detected capacity change from 0 to 53784
Mar 19 11:33:19.166367 kernel: loop7: detected capacity change from 0 to 201592
Mar 19 11:33:19.205804 (sd-merge)[1571]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 19 11:33:19.207067 (sd-merge)[1571]: Merged extensions into '/usr'.
Mar 19 11:33:19.217016 systemd[1]: Reload requested from client PID 1542 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 19 11:33:19.217058 systemd[1]: Reloading...
Mar 19 11:33:19.510350 zram_generator::config[1599]: No configuration found.
Mar 19 11:33:19.880795 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:33:20.051134 systemd[1]: Reloading finished in 832 ms.
Mar 19 11:33:20.086407 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 19 11:33:20.092009 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 19 11:33:20.110035 systemd[1]: Starting ensure-sysext.service...
Mar 19 11:33:20.115698 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 19 11:33:20.124495 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 19 11:33:20.153608 systemd[1]: Reload requested from client PID 1651 ('systemctl') (unit ensure-sysext.service)...
Mar 19 11:33:20.153643 systemd[1]: Reloading...
Mar 19 11:33:20.194286 ldconfig[1537]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 19 11:33:20.235661 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 19 11:33:20.239955 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 19 11:33:20.244751 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 19 11:33:20.249526 systemd-tmpfiles[1652]: ACLs are not supported, ignoring.
Mar 19 11:33:20.249718 systemd-tmpfiles[1652]: ACLs are not supported, ignoring.
Mar 19 11:33:20.265372 systemd-tmpfiles[1652]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 11:33:20.267609 systemd-tmpfiles[1652]: Skipping /boot
Mar 19 11:33:20.272820 systemd-udevd[1653]: Using default interface naming scheme 'v255'.
Mar 19 11:33:20.332219 systemd-tmpfiles[1652]: Detected autofs mount point /boot during canonicalization of boot.
Mar 19 11:33:20.334575 systemd-tmpfiles[1652]: Skipping /boot
Mar 19 11:33:20.439352 zram_generator::config[1685]: No configuration found.
Mar 19 11:33:20.605583 (udev-worker)[1705]: Network interface NamePolicy= disabled on kernel command line.
Mar 19 11:33:20.916124 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 19 11:33:20.966436 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1706)
Mar 19 11:33:21.123701 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 19 11:33:21.124004 systemd[1]: Reloading finished in 969 ms.
Mar 19 11:33:21.146241 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 19 11:33:21.162618 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 19 11:33:21.198848 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 19 11:33:21.266745 systemd[1]: Finished ensure-sysext.service.
Mar 19 11:33:21.299740 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 19 11:33:21.333013 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 19 11:33:21.342706 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 19 11:33:21.355630 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 19 11:33:21.358480 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 19 11:33:21.362993 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 19 11:33:21.371505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 19 11:33:21.379703 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 19 11:33:21.386743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 19 11:33:21.395675 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 19 11:33:21.396121 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 19 11:33:21.401937 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 19 11:33:21.410502 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 19 11:33:21.436004 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 19 11:33:21.452472 lvm[1853]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 19 11:33:21.462820 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 19 11:33:21.476916 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 19 11:33:21.487770 systemd[1]: Reached target time-set.target - System Time Set.
Mar 19 11:33:21.493827 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 19 11:33:21.497165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 19 11:33:21.510774 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 19 11:33:21.513844 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 19 11:33:21.524939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 19 11:33:21.527791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 19 11:33:21.535679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 19 11:33:21.537480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 19 11:33:21.541646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 19 11:33:21.559950 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 19 11:33:21.562846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 19 11:33:21.564404 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 19 11:33:21.593054 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 19 11:33:21.621749 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 19 11:33:21.627483 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 19 11:33:21.632425 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 19 11:33:21.653858 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 19 11:33:21.657348 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 19 11:33:21.665438 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 19 11:33:21.675261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:33:21.692772 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 19 11:33:21.701380 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 19 11:33:21.726474 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 19 11:33:21.734108 augenrules[1899]: No rules Mar 19 11:33:21.735570 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:33:21.737523 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:33:21.742110 lvm[1891]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:33:21.788187 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 19 11:33:21.795950 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 19 11:33:21.845833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:33:21.942835 systemd-networkd[1866]: lo: Link UP Mar 19 11:33:21.942852 systemd-networkd[1866]: lo: Gained carrier Mar 19 11:33:21.946942 systemd-networkd[1866]: Enumeration completed Mar 19 11:33:21.947188 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 19 11:33:21.948796 systemd-networkd[1866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:33:21.948818 systemd-networkd[1866]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:33:21.951281 systemd-networkd[1866]: eth0: Link UP Mar 19 11:33:21.951867 systemd-networkd[1866]: eth0: Gained carrier Mar 19 11:33:21.952043 systemd-networkd[1866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:33:21.959693 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 19 11:33:21.963842 systemd-resolved[1867]: Positive Trust Anchors: Mar 19 11:33:21.963884 systemd-resolved[1867]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:33:21.963949 systemd-resolved[1867]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:33:21.967829 systemd-networkd[1866]: eth0: DHCPv4 address 172.31.16.168/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 19 11:33:21.968206 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 19 11:33:21.981931 systemd-resolved[1867]: Defaulting to hostname 'linux'. Mar 19 11:33:22.001492 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Mar 19 11:33:22.006394 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 19 11:33:22.010861 systemd[1]: Reached target network.target - Network. Mar 19 11:33:22.014492 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:33:22.019496 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:33:22.022805 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 19 11:33:22.025969 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 19 11:33:22.029091 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 19 11:33:22.031682 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 19 11:33:22.035194 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 19 11:33:22.038457 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 19 11:33:22.038517 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:33:22.040651 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:33:22.044744 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 19 11:33:22.049607 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 19 11:33:22.057002 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 19 11:33:22.061622 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 19 11:33:22.064698 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 19 11:33:22.077045 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Mar 19 11:33:22.080422 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 19 11:33:22.084658 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 19 11:33:22.087720 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:33:22.090224 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:33:22.092733 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:33:22.092788 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:33:22.099514 systemd[1]: Starting containerd.service - containerd container runtime... Mar 19 11:33:22.107723 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 19 11:33:22.118731 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 19 11:33:22.128742 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 19 11:33:22.143989 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 19 11:33:22.148998 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 19 11:33:22.156698 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 19 11:33:22.165994 jq[1925]: false Mar 19 11:33:22.169698 systemd[1]: Started ntpd.service - Network Time Service. Mar 19 11:33:22.179529 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 19 11:33:22.190597 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 19 11:33:22.197179 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 19 11:33:22.203497 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 19 11:33:22.215176 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 19 11:33:22.217176 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 19 11:33:22.218907 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 19 11:33:22.221577 systemd[1]: Starting update-engine.service - Update Engine... Mar 19 11:33:22.227257 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 19 11:33:22.232694 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 19 11:33:22.233196 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 19 11:33:22.288086 dbus-daemon[1924]: [system] SELinux support is enabled Mar 19 11:33:22.297790 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 19 11:33:22.310930 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 19 11:33:22.312446 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 19 11:33:22.315410 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 19 11:33:22.315454 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 19 11:33:22.322233 dbus-daemon[1924]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1866 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 19 11:33:22.333963 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 19 11:33:22.345617 extend-filesystems[1926]: Found loop4 Mar 19 11:33:22.345617 extend-filesystems[1926]: Found loop5 Mar 19 11:33:22.345617 extend-filesystems[1926]: Found loop6 Mar 19 11:33:22.345617 extend-filesystems[1926]: Found loop7 Mar 19 11:33:22.345617 extend-filesystems[1926]: Found nvme0n1 Mar 19 11:33:22.345617 extend-filesystems[1926]: Found nvme0n1p1 Mar 19 11:33:22.345617 extend-filesystems[1926]: Found nvme0n1p2 Mar 19 11:33:22.345617 extend-filesystems[1926]: Found nvme0n1p3 Mar 19 11:33:22.345617 extend-filesystems[1926]: Found usr Mar 19 11:33:22.345617 extend-filesystems[1926]: Found nvme0n1p4 Mar 19 11:33:22.414035 extend-filesystems[1926]: Found nvme0n1p6 Mar 19 11:33:22.414035 extend-filesystems[1926]: Found nvme0n1p7 Mar 19 11:33:22.414035 extend-filesystems[1926]: Found nvme0n1p9 Mar 19 11:33:22.414035 extend-filesystems[1926]: Checking size of /dev/nvme0n1p9 Mar 19 11:33:22.440667 jq[1938]: true Mar 19 11:33:22.361767 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 19 11:33:22.378633 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 19 11:33:22.379189 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 19 11:33:22.457538 update_engine[1937]: I20250319 11:33:22.442132 1937 main.cc:92] Flatcar Update Engine starting Mar 19 11:33:22.458159 tar[1944]: linux-arm64/LICENSE Mar 19 11:33:22.458159 tar[1944]: linux-arm64/helm Mar 19 11:33:22.443859 (ntainerd)[1949]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 19 11:33:22.493398 update_engine[1937]: I20250319 11:33:22.467872 1937 update_check_scheduler.cc:74] Next update check in 2m1s Mar 19 11:33:22.476739 systemd[1]: motdgen.service: Deactivated successfully. Mar 19 11:33:22.477220 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 19 11:33:22.482224 systemd[1]: Started update-engine.service - Update Engine. Mar 19 11:33:22.506196 ntpd[1930]: ntpd 4.2.8p17@1.4004-o Wed Mar 19 09:45:36 UTC 2025 (1): Starting Mar 19 11:33:22.507596 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: ntpd 4.2.8p17@1.4004-o Wed Mar 19 09:45:36 UTC 2025 (1): Starting Mar 19 11:33:22.507596 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 19 11:33:22.507596 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: ---------------------------------------------------- Mar 19 11:33:22.500639 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 19 11:33:22.506255 ntpd[1930]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 19 11:33:22.506278 ntpd[1930]: ---------------------------------------------------- Mar 19 11:33:22.506298 ntpd[1930]: ntp-4 is maintained by Network Time Foundation, Mar 19 11:33:22.520344 extend-filesystems[1926]: Resized partition /dev/nvme0n1p9 Mar 19 11:33:22.518394 ntpd[1930]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 19 11:33:22.522586 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: ntp-4 is maintained by Network Time Foundation, Mar 19 11:33:22.522586 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Mar 19 11:33:22.522586 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: corporation. Support and training for ntp-4 are Mar 19 11:33:22.522586 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: available at https://www.nwtime.org/support Mar 19 11:33:22.522586 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: ---------------------------------------------------- Mar 19 11:33:22.518414 ntpd[1930]: corporation. Support and training for ntp-4 are Mar 19 11:33:22.518431 ntpd[1930]: available at https://www.nwtime.org/support Mar 19 11:33:22.518450 ntpd[1930]: ---------------------------------------------------- Mar 19 11:33:22.531146 ntpd[1930]: proto: precision = 0.096 usec (-23) Mar 19 11:33:22.532481 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: proto: precision = 0.096 usec (-23) Mar 19 11:33:22.534703 ntpd[1930]: basedate set to 2025-03-07 Mar 19 11:33:22.534757 ntpd[1930]: gps base set to 2025-03-09 (week 2357) Mar 19 11:33:22.535014 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: basedate set to 2025-03-07 Mar 19 11:33:22.535014 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: gps base set to 2025-03-09 (week 2357) Mar 19 11:33:22.538347 jq[1954]: true Mar 19 11:33:22.571933 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 19 11:33:22.571983 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123 Mar 19 11:33:22.571983 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 19 11:33:22.571983 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123 Mar 19 11:33:22.571983 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: Listen normally on 3 eth0 172.31.16.168:123 Mar 19 11:33:22.571983 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: Listen normally on 4 lo [::1]:123 Mar 19 11:33:22.571983 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: bind(21) AF_INET6 fe80::469:68ff:fe8b:9365%2#123 flags 0x11 failed: Cannot assign requested address Mar 19 11:33:22.571983 ntpd[1930]: 19 Mar 11:33:22 
ntpd[1930]: unable to create socket on eth0 (5) for fe80::469:68ff:fe8b:9365%2#123 Mar 19 11:33:22.571983 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: failed to init interface for address fe80::469:68ff:fe8b:9365%2 Mar 19 11:33:22.571983 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: Listening on routing socket on fd #21 for interface updates Mar 19 11:33:22.556414 ntpd[1930]: Listen and drop on 0 v6wildcard [::]:123 Mar 19 11:33:22.572624 extend-filesystems[1972]: resize2fs 1.47.1 (20-May-2024) Mar 19 11:33:22.556522 ntpd[1930]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 19 11:33:22.556825 ntpd[1930]: Listen normally on 2 lo 127.0.0.1:123 Mar 19 11:33:22.556893 ntpd[1930]: Listen normally on 3 eth0 172.31.16.168:123 Mar 19 11:33:22.556971 ntpd[1930]: Listen normally on 4 lo [::1]:123 Mar 19 11:33:22.557059 ntpd[1930]: bind(21) AF_INET6 fe80::469:68ff:fe8b:9365%2#123 flags 0x11 failed: Cannot assign requested address Mar 19 11:33:22.557104 ntpd[1930]: unable to create socket on eth0 (5) for fe80::469:68ff:fe8b:9365%2#123 Mar 19 11:33:22.557133 ntpd[1930]: failed to init interface for address fe80::469:68ff:fe8b:9365%2 Mar 19 11:33:22.557196 ntpd[1930]: Listening on routing socket on fd #21 for interface updates Mar 19 11:33:22.595554 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 19 11:33:22.595632 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 19 11:33:22.596712 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 19 11:33:22.596712 ntpd[1930]: 19 Mar 11:33:22 ntpd[1930]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 19 11:33:22.629343 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 19 11:33:22.651172 extend-filesystems[1972]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 19 11:33:22.651172 extend-filesystems[1972]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 19 11:33:22.651172 extend-filesystems[1972]: The filesystem on 
/dev/nvme0n1p9 is now 1489915 (4k) blocks long. Mar 19 11:33:22.698553 extend-filesystems[1926]: Resized filesystem in /dev/nvme0n1p9 Mar 19 11:33:22.657777 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 19 11:33:22.658275 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 19 11:33:22.692882 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 19 11:33:22.740072 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 19 11:33:22.822694 coreos-metadata[1923]: Mar 19 11:33:22.821 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 19 11:33:22.827195 coreos-metadata[1923]: Mar 19 11:33:22.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 19 11:33:22.829176 coreos-metadata[1923]: Mar 19 11:33:22.828 INFO Fetch successful Mar 19 11:33:22.829176 coreos-metadata[1923]: Mar 19 11:33:22.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 19 11:33:22.831378 coreos-metadata[1923]: Mar 19 11:33:22.831 INFO Fetch successful Mar 19 11:33:22.831378 coreos-metadata[1923]: Mar 19 11:33:22.831 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 19 11:33:22.832927 coreos-metadata[1923]: Mar 19 11:33:22.832 INFO Fetch successful Mar 19 11:33:22.833196 coreos-metadata[1923]: Mar 19 11:33:22.832 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 19 11:33:22.843156 coreos-metadata[1923]: Mar 19 11:33:22.842 INFO Fetch successful Mar 19 11:33:22.843156 coreos-metadata[1923]: Mar 19 11:33:22.842 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 19 11:33:22.851527 coreos-metadata[1923]: Mar 19 11:33:22.851 INFO Fetch failed with 404: resource not found Mar 19 11:33:22.851527 coreos-metadata[1923]: Mar 19 11:33:22.851 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 19 11:33:22.852754 
coreos-metadata[1923]: Mar 19 11:33:22.852 INFO Fetch successful Mar 19 11:33:22.852754 coreos-metadata[1923]: Mar 19 11:33:22.852 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 19 11:33:22.855777 coreos-metadata[1923]: Mar 19 11:33:22.853 INFO Fetch successful Mar 19 11:33:22.855777 coreos-metadata[1923]: Mar 19 11:33:22.855 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 19 11:33:22.859077 coreos-metadata[1923]: Mar 19 11:33:22.858 INFO Fetch successful Mar 19 11:33:22.859077 coreos-metadata[1923]: Mar 19 11:33:22.858 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 19 11:33:22.860073 coreos-metadata[1923]: Mar 19 11:33:22.859 INFO Fetch successful Mar 19 11:33:22.860073 coreos-metadata[1923]: Mar 19 11:33:22.860 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 19 11:33:22.863354 coreos-metadata[1923]: Mar 19 11:33:22.861 INFO Fetch successful Mar 19 11:33:22.890340 bash[2006]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:33:22.973644 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1687) Mar 19 11:33:22.974749 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 19 11:33:22.989968 systemd[1]: Starting sshkeys.service... Mar 19 11:33:23.012902 systemd-logind[1936]: Watching system buttons on /dev/input/event0 (Power Button) Mar 19 11:33:23.012939 systemd-logind[1936]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 19 11:33:23.013587 systemd-logind[1936]: New seat seat0. Mar 19 11:33:23.028994 systemd[1]: Started systemd-logind.service - User Login Management. Mar 19 11:33:23.036670 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 19 11:33:23.044221 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 19 11:33:23.052533 dbus-daemon[1924]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1951 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 19 11:33:23.061780 systemd[1]: Starting polkit.service - Authorization Manager... Mar 19 11:33:23.087045 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 19 11:33:23.098360 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 19 11:33:23.108488 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 19 11:33:23.111599 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 19 11:33:23.147655 polkitd[2025]: Started polkitd version 121 Mar 19 11:33:23.197875 polkitd[2025]: Loading rules from directory /etc/polkit-1/rules.d Mar 19 11:33:23.198131 polkitd[2025]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 19 11:33:23.206806 systemd-networkd[1866]: eth0: Gained IPv6LL Mar 19 11:33:23.217490 polkitd[2025]: Finished loading, compiling and executing 2 rules Mar 19 11:33:23.225553 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 19 11:33:23.227215 dbus-daemon[1924]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 19 11:33:23.229831 systemd[1]: Reached target network-online.target - Network is Online. Mar 19 11:33:23.242866 polkitd[2025]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 19 11:33:23.254440 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 19 11:33:23.271015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 19 11:33:23.281935 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 19 11:33:23.285756 systemd[1]: Started polkit.service - Authorization Manager. Mar 19 11:33:23.423001 systemd-hostnamed[1951]: Hostname set to (transient) Mar 19 11:33:23.427712 systemd-resolved[1867]: System hostname changed to 'ip-172-31-16-168'. Mar 19 11:33:23.430782 locksmithd[1971]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 19 11:33:23.465942 containerd[1949]: time="2025-03-19T11:33:23.464512838Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 19 11:33:23.557961 amazon-ssm-agent[2051]: Initializing new seelog logger Mar 19 11:33:23.557961 amazon-ssm-agent[2051]: New Seelog Logger Creation Complete Mar 19 11:33:23.557961 amazon-ssm-agent[2051]: 2025/03/19 11:33:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:23.557961 amazon-ssm-agent[2051]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:23.557961 amazon-ssm-agent[2051]: 2025/03/19 11:33:23 processing appconfig overrides Mar 19 11:33:23.557961 amazon-ssm-agent[2051]: 2025/03/19 11:33:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:23.557961 amazon-ssm-agent[2051]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:23.557961 amazon-ssm-agent[2051]: 2025/03/19 11:33:23 processing appconfig overrides Mar 19 11:33:23.557961 amazon-ssm-agent[2051]: 2025/03/19 11:33:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:23.557961 amazon-ssm-agent[2051]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 19 11:33:23.557961 amazon-ssm-agent[2051]: 2025/03/19 11:33:23 processing appconfig overrides Mar 19 11:33:23.564567 amazon-ssm-agent[2051]: 2025-03-19 11:33:23 INFO Proxy environment variables: Mar 19 11:33:23.568907 amazon-ssm-agent[2051]: 2025/03/19 11:33:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:23.568907 amazon-ssm-agent[2051]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 19 11:33:23.568907 amazon-ssm-agent[2051]: 2025/03/19 11:33:23 processing appconfig overrides Mar 19 11:33:23.574198 coreos-metadata[2028]: Mar 19 11:33:23.570 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 19 11:33:23.576903 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 19 11:33:23.588500 coreos-metadata[2028]: Mar 19 11:33:23.585 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 19 11:33:23.588500 coreos-metadata[2028]: Mar 19 11:33:23.587 INFO Fetch successful Mar 19 11:33:23.598657 coreos-metadata[2028]: Mar 19 11:33:23.588 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 19 11:33:23.598657 coreos-metadata[2028]: Mar 19 11:33:23.598 INFO Fetch successful Mar 19 11:33:23.600896 unknown[2028]: wrote ssh authorized keys file for user: core Mar 19 11:33:23.615550 sshd_keygen[1966]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 11:33:23.661546 amazon-ssm-agent[2051]: 2025-03-19 11:33:23 INFO https_proxy: Mar 19 11:33:23.761801 amazon-ssm-agent[2051]: 2025-03-19 11:33:23 INFO http_proxy: Mar 19 11:33:23.776526 update-ssh-keys[2102]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:33:23.781455 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 19 11:33:23.798570 systemd[1]: Finished sshkeys.service. 
Mar 19 11:33:23.822029 containerd[1949]: time="2025-03-19T11:33:23.821461923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:23.837802 containerd[1949]: time="2025-03-19T11:33:23.837699412Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:33:23.837802 containerd[1949]: time="2025-03-19T11:33:23.837779416Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 19 11:33:23.837963 containerd[1949]: time="2025-03-19T11:33:23.837818512Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 19 11:33:23.840345 containerd[1949]: time="2025-03-19T11:33:23.838175236Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 19 11:33:23.840345 containerd[1949]: time="2025-03-19T11:33:23.838242544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:23.847048 containerd[1949]: time="2025-03-19T11:33:23.846577204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:33:23.847048 containerd[1949]: time="2025-03-19T11:33:23.846735088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:23.847404 containerd[1949]: time="2025-03-19T11:33:23.847205356Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:33:23.847516 containerd[1949]: time="2025-03-19T11:33:23.847428928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:23.847574 containerd[1949]: time="2025-03-19T11:33:23.847500748Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:33:23.849434 containerd[1949]: time="2025-03-19T11:33:23.847533184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:23.856535 containerd[1949]: time="2025-03-19T11:33:23.852071920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:23.856535 containerd[1949]: time="2025-03-19T11:33:23.855643840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:33:23.865520 containerd[1949]: time="2025-03-19T11:33:23.860046604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:33:23.865520 containerd[1949]: time="2025-03-19T11:33:23.860503576Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 19 11:33:23.865520 containerd[1949]: time="2025-03-19T11:33:23.864633124Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 19 11:33:23.866423 containerd[1949]: time="2025-03-19T11:33:23.866353096Z" level=info msg="metadata content store policy set" policy=shared Mar 19 11:33:23.867280 amazon-ssm-agent[2051]: 2025-03-19 11:33:23 INFO no_proxy: Mar 19 11:33:23.874828 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 11:33:23.886276 containerd[1949]: time="2025-03-19T11:33:23.886184260Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 19 11:33:23.886491 containerd[1949]: time="2025-03-19T11:33:23.886283716Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 19 11:33:23.886491 containerd[1949]: time="2025-03-19T11:33:23.886351192Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 19 11:33:23.886491 containerd[1949]: time="2025-03-19T11:33:23.886389892Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 19 11:33:23.886491 containerd[1949]: time="2025-03-19T11:33:23.886427872Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 19 11:33:23.887544 containerd[1949]: time="2025-03-19T11:33:23.886761712Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 19 11:33:23.891691 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 11:33:23.895519 containerd[1949]: time="2025-03-19T11:33:23.895451032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 19 11:33:23.895830 containerd[1949]: time="2025-03-19T11:33:23.895761076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Mar 19 11:33:23.895830 containerd[1949]: time="2025-03-19T11:33:23.895818916Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 19 11:33:23.895962 containerd[1949]: time="2025-03-19T11:33:23.895862164Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 19 11:33:23.895962 containerd[1949]: time="2025-03-19T11:33:23.895894732Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 19 11:33:23.895962 containerd[1949]: time="2025-03-19T11:33:23.895925824Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 19 11:33:23.896088 containerd[1949]: time="2025-03-19T11:33:23.895978276Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 19 11:33:23.896088 containerd[1949]: time="2025-03-19T11:33:23.896013928Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 19 11:33:23.896088 containerd[1949]: time="2025-03-19T11:33:23.896049220Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 19 11:33:23.896088 containerd[1949]: time="2025-03-19T11:33:23.896079688Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 19 11:33:23.898328 containerd[1949]: time="2025-03-19T11:33:23.896109652Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 19 11:33:23.898328 containerd[1949]: time="2025-03-19T11:33:23.896137420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Mar 19 11:33:23.898328 containerd[1949]: time="2025-03-19T11:33:23.896180392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.898328 containerd[1949]: time="2025-03-19T11:33:23.896211844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.898328 containerd[1949]: time="2025-03-19T11:33:23.896241064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.898328 containerd[1949]: time="2025-03-19T11:33:23.896272540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.899857 systemd[1]: Started sshd@0-172.31.16.168:22-139.178.68.195:33138.service - OpenSSH per-connection server daemon (139.178.68.195:33138). Mar 19 11:33:23.915466 containerd[1949]: time="2025-03-19T11:33:23.915391876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.915597 containerd[1949]: time="2025-03-19T11:33:23.915477352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.915597 containerd[1949]: time="2025-03-19T11:33:23.915512632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.915597 containerd[1949]: time="2025-03-19T11:33:23.915546976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.915597 containerd[1949]: time="2025-03-19T11:33:23.915582016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.915807 containerd[1949]: time="2025-03-19T11:33:23.915629260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Mar 19 11:33:23.915807 containerd[1949]: time="2025-03-19T11:33:23.915665728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.915807 containerd[1949]: time="2025-03-19T11:33:23.915696196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.915807 containerd[1949]: time="2025-03-19T11:33:23.915727108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.915807 containerd[1949]: time="2025-03-19T11:33:23.915760192Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 19 11:33:23.917719 containerd[1949]: time="2025-03-19T11:33:23.915810016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.917719 containerd[1949]: time="2025-03-19T11:33:23.915844948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.917719 containerd[1949]: time="2025-03-19T11:33:23.915874036Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 19 11:33:23.917719 containerd[1949]: time="2025-03-19T11:33:23.916025188Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 19 11:33:23.917719 containerd[1949]: time="2025-03-19T11:33:23.916071208Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 19 11:33:23.917719 containerd[1949]: time="2025-03-19T11:33:23.916097992Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 19 11:33:23.917719 containerd[1949]: time="2025-03-19T11:33:23.916126828Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 19 11:33:23.917719 containerd[1949]: time="2025-03-19T11:33:23.916166572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.917719 containerd[1949]: time="2025-03-19T11:33:23.916201240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 19 11:33:23.917719 containerd[1949]: time="2025-03-19T11:33:23.916225936Z" level=info msg="NRI interface is disabled by configuration." Mar 19 11:33:23.917719 containerd[1949]: time="2025-03-19T11:33:23.916267984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 19 11:33:23.925207 containerd[1949]: time="2025-03-19T11:33:23.925021180Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 19 11:33:23.925207 containerd[1949]: time="2025-03-19T11:33:23.925174132Z" level=info msg="Connect containerd service" Mar 19 11:33:23.926392 containerd[1949]: time="2025-03-19T11:33:23.925257784Z" level=info msg="using legacy CRI server" Mar 19 11:33:23.926392 containerd[1949]: time="2025-03-19T11:33:23.925279804Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 19 11:33:23.933440 containerd[1949]: 
time="2025-03-19T11:33:23.932750080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 19 11:33:23.940090 containerd[1949]: time="2025-03-19T11:33:23.940009228Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:33:23.944519 containerd[1949]: time="2025-03-19T11:33:23.944429164Z" level=info msg="Start subscribing containerd event" Mar 19 11:33:23.944639 containerd[1949]: time="2025-03-19T11:33:23.944534680Z" level=info msg="Start recovering state" Mar 19 11:33:23.944688 containerd[1949]: time="2025-03-19T11:33:23.944662360Z" level=info msg="Start event monitor" Mar 19 11:33:23.944734 containerd[1949]: time="2025-03-19T11:33:23.944691736Z" level=info msg="Start snapshots syncer" Mar 19 11:33:23.944734 containerd[1949]: time="2025-03-19T11:33:23.944713852Z" level=info msg="Start cni network conf syncer for default" Mar 19 11:33:23.944940 containerd[1949]: time="2025-03-19T11:33:23.944732080Z" level=info msg="Start streaming server" Mar 19 11:33:23.952048 containerd[1949]: time="2025-03-19T11:33:23.950674576Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 19 11:33:23.952048 containerd[1949]: time="2025-03-19T11:33:23.950819800Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 19 11:33:23.951051 systemd[1]: Started containerd.service - containerd container runtime. Mar 19 11:33:23.986336 amazon-ssm-agent[2051]: 2025-03-19 11:33:23 INFO Checking if agent identity type OnPrem can be assumed Mar 19 11:33:23.986511 containerd[1949]: time="2025-03-19T11:33:23.985936888Z" level=info msg="containerd successfully booted in 0.532188s" Mar 19 11:33:24.009522 systemd[1]: issuegen.service: Deactivated successfully. 
Mar 19 11:33:24.011348 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 11:33:24.061075 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 19 11:33:24.084631 amazon-ssm-agent[2051]: 2025-03-19 11:33:23 INFO Checking if agent identity type EC2 can be assumed Mar 19 11:33:24.140383 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 19 11:33:24.156909 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 19 11:33:24.170841 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 19 11:33:24.173861 systemd[1]: Reached target getty.target - Login Prompts. Mar 19 11:33:24.184353 amazon-ssm-agent[2051]: 2025-03-19 11:33:24 INFO Agent will take identity from EC2 Mar 19 11:33:24.193117 sshd[2144]: Accepted publickey for core from 139.178.68.195 port 33138 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:24.199035 sshd-session[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:24.234553 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 19 11:33:24.246822 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 19 11:33:24.284527 systemd-logind[1936]: New session 1 of user core. Mar 19 11:33:24.287231 amazon-ssm-agent[2051]: 2025-03-19 11:33:24 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 19 11:33:24.296858 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 19 11:33:24.332969 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 19 11:33:24.354742 (systemd)[2168]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 19 11:33:24.363067 systemd-logind[1936]: New session c1 of user core. 
Mar 19 11:33:24.386430 amazon-ssm-agent[2051]: 2025-03-19 11:33:24 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 19 11:33:24.488367 amazon-ssm-agent[2051]: 2025-03-19 11:33:24 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 19 11:33:24.587347 amazon-ssm-agent[2051]: 2025-03-19 11:33:24 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 19 11:33:24.687474 amazon-ssm-agent[2051]: 2025-03-19 11:33:24 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 19 11:33:24.783405 tar[1944]: linux-arm64/README.md Mar 19 11:33:24.785571 amazon-ssm-agent[2051]: 2025-03-19 11:33:24 INFO [amazon-ssm-agent] Starting Core Agent Mar 19 11:33:24.797508 systemd[2168]: Queued start job for default target default.target. Mar 19 11:33:24.801852 systemd[2168]: Created slice app.slice - User Application Slice. Mar 19 11:33:24.802110 systemd[2168]: Reached target paths.target - Paths. Mar 19 11:33:24.802220 systemd[2168]: Reached target timers.target - Timers. Mar 19 11:33:24.806570 systemd[2168]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 19 11:33:24.813658 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 19 11:33:24.845807 systemd[2168]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 19 11:33:24.846118 systemd[2168]: Reached target sockets.target - Sockets. Mar 19 11:33:24.846370 systemd[2168]: Reached target basic.target - Basic System. Mar 19 11:33:24.846466 systemd[2168]: Reached target default.target - Main User Target. Mar 19 11:33:24.846512 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 19 11:33:24.846525 systemd[2168]: Startup finished in 462ms. Mar 19 11:33:24.859646 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 19 11:33:24.886173 amazon-ssm-agent[2051]: 2025-03-19 11:33:24 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Mar 19 11:33:24.986546 amazon-ssm-agent[2051]: 2025-03-19 11:33:24 INFO [Registrar] Starting registrar module Mar 19 11:33:25.032829 systemd[1]: Started sshd@1-172.31.16.168:22-139.178.68.195:33150.service - OpenSSH per-connection server daemon (139.178.68.195:33150). Mar 19 11:33:25.087157 amazon-ssm-agent[2051]: 2025-03-19 11:33:24 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 19 11:33:25.263693 sshd[2182]: Accepted publickey for core from 139.178.68.195 port 33150 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:25.268006 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:25.287692 systemd-logind[1936]: New session 2 of user core. Mar 19 11:33:25.293985 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 19 11:33:25.431356 sshd[2184]: Connection closed by 139.178.68.195 port 33150 Mar 19 11:33:25.432146 sshd-session[2182]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:25.444093 systemd[1]: sshd@1-172.31.16.168:22-139.178.68.195:33150.service: Deactivated successfully. Mar 19 11:33:25.450271 systemd[1]: session-2.scope: Deactivated successfully. Mar 19 11:33:25.454211 systemd-logind[1936]: Session 2 logged out. Waiting for processes to exit. Mar 19 11:33:25.480729 systemd[1]: Started sshd@2-172.31.16.168:22-139.178.68.195:33152.service - OpenSSH per-connection server daemon (139.178.68.195:33152). Mar 19 11:33:25.500098 systemd-logind[1936]: Removed session 2. 
Mar 19 11:33:25.519789 ntpd[1930]: Listen normally on 6 eth0 [fe80::469:68ff:fe8b:9365%2]:123 Mar 19 11:33:25.520610 ntpd[1930]: 19 Mar 11:33:25 ntpd[1930]: Listen normally on 6 eth0 [fe80::469:68ff:fe8b:9365%2]:123 Mar 19 11:33:25.723468 sshd[2189]: Accepted publickey for core from 139.178.68.195 port 33152 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:25.725447 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:25.738430 systemd-logind[1936]: New session 3 of user core. Mar 19 11:33:25.744793 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 19 11:33:25.887494 sshd[2192]: Connection closed by 139.178.68.195 port 33152 Mar 19 11:33:25.888289 sshd-session[2189]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:25.894484 systemd[1]: sshd@2-172.31.16.168:22-139.178.68.195:33152.service: Deactivated successfully. Mar 19 11:33:25.899393 systemd[1]: session-3.scope: Deactivated successfully. Mar 19 11:33:25.901583 systemd-logind[1936]: Session 3 logged out. Waiting for processes to exit. Mar 19 11:33:25.904701 systemd-logind[1936]: Removed session 3. Mar 19 11:33:25.917444 amazon-ssm-agent[2051]: 2025-03-19 11:33:25 INFO [EC2Identity] EC2 registration was successful. 
Mar 19 11:33:25.944595 amazon-ssm-agent[2051]: 2025-03-19 11:33:25 INFO [CredentialRefresher] credentialRefresher has started Mar 19 11:33:25.944595 amazon-ssm-agent[2051]: 2025-03-19 11:33:25 INFO [CredentialRefresher] Starting credentials refresher loop Mar 19 11:33:25.944595 amazon-ssm-agent[2051]: 2025-03-19 11:33:25 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 19 11:33:26.018082 amazon-ssm-agent[2051]: 2025-03-19 11:33:25 INFO [CredentialRefresher] Next credential rotation will be in 31.458324347033333 minutes Mar 19 11:33:26.973076 amazon-ssm-agent[2051]: 2025-03-19 11:33:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 19 11:33:27.075859 amazon-ssm-agent[2051]: 2025-03-19 11:33:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2199) started Mar 19 11:33:27.176444 amazon-ssm-agent[2051]: 2025-03-19 11:33:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 19 11:33:27.358996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:33:27.362433 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 11:33:27.365422 systemd[1]: Startup finished in 1.076s (kernel) + 10.964s (initrd) + 11.669s (userspace) = 23.710s. 
Mar 19 11:33:27.382951 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:33:28.606129 kubelet[2214]: E0319 11:33:28.606024 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:33:28.610457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:33:28.610806 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:33:28.611689 systemd[1]: kubelet.service: Consumed 1.339s CPU time, 250.3M memory peak. Mar 19 11:33:29.330090 systemd-resolved[1867]: Clock change detected. Flushing caches. Mar 19 11:33:35.740707 systemd[1]: Started sshd@3-172.31.16.168:22-139.178.68.195:46018.service - OpenSSH per-connection server daemon (139.178.68.195:46018). Mar 19 11:33:35.935704 sshd[2226]: Accepted publickey for core from 139.178.68.195 port 46018 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:35.938333 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:35.946847 systemd-logind[1936]: New session 4 of user core. Mar 19 11:33:35.959499 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 19 11:33:36.086682 sshd[2228]: Connection closed by 139.178.68.195 port 46018 Mar 19 11:33:36.086384 sshd-session[2226]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:36.094407 systemd-logind[1936]: Session 4 logged out. Waiting for processes to exit. Mar 19 11:33:36.095621 systemd[1]: sshd@3-172.31.16.168:22-139.178.68.195:46018.service: Deactivated successfully. Mar 19 11:33:36.099286 systemd[1]: session-4.scope: Deactivated successfully. 
Mar 19 11:33:36.101926 systemd-logind[1936]: Removed session 4. Mar 19 11:33:36.127678 systemd[1]: Started sshd@4-172.31.16.168:22-139.178.68.195:46020.service - OpenSSH per-connection server daemon (139.178.68.195:46020). Mar 19 11:33:36.305524 sshd[2234]: Accepted publickey for core from 139.178.68.195 port 46020 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:36.308074 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:36.317217 systemd-logind[1936]: New session 5 of user core. Mar 19 11:33:36.324546 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 19 11:33:36.442528 sshd[2236]: Connection closed by 139.178.68.195 port 46020 Mar 19 11:33:36.443385 sshd-session[2234]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:36.450350 systemd[1]: sshd@4-172.31.16.168:22-139.178.68.195:46020.service: Deactivated successfully. Mar 19 11:33:36.454671 systemd[1]: session-5.scope: Deactivated successfully. Mar 19 11:33:36.456887 systemd-logind[1936]: Session 5 logged out. Waiting for processes to exit. Mar 19 11:33:36.458987 systemd-logind[1936]: Removed session 5. Mar 19 11:33:36.479678 systemd[1]: Started sshd@5-172.31.16.168:22-139.178.68.195:46024.service - OpenSSH per-connection server daemon (139.178.68.195:46024). Mar 19 11:33:36.656537 sshd[2242]: Accepted publickey for core from 139.178.68.195 port 46024 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:36.658886 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:36.667594 systemd-logind[1936]: New session 6 of user core. Mar 19 11:33:36.677458 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 19 11:33:36.801026 sshd[2244]: Connection closed by 139.178.68.195 port 46024 Mar 19 11:33:36.800902 sshd-session[2242]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:36.806542 systemd[1]: sshd@5-172.31.16.168:22-139.178.68.195:46024.service: Deactivated successfully. Mar 19 11:33:36.810813 systemd[1]: session-6.scope: Deactivated successfully. Mar 19 11:33:36.814209 systemd-logind[1936]: Session 6 logged out. Waiting for processes to exit. Mar 19 11:33:36.816679 systemd-logind[1936]: Removed session 6. Mar 19 11:33:36.848897 systemd[1]: Started sshd@6-172.31.16.168:22-139.178.68.195:46038.service - OpenSSH per-connection server daemon (139.178.68.195:46038). Mar 19 11:33:37.028272 sshd[2250]: Accepted publickey for core from 139.178.68.195 port 46038 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:37.030651 sshd-session[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:37.041543 systemd-logind[1936]: New session 7 of user core. Mar 19 11:33:37.053490 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 19 11:33:37.173350 sudo[2253]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 19 11:33:37.174054 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:33:37.190467 sudo[2253]: pam_unix(sudo:session): session closed for user root Mar 19 11:33:37.214459 sshd[2252]: Connection closed by 139.178.68.195 port 46038 Mar 19 11:33:37.214263 sshd-session[2250]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:37.221326 systemd[1]: sshd@6-172.31.16.168:22-139.178.68.195:46038.service: Deactivated successfully. Mar 19 11:33:37.221831 systemd-logind[1936]: Session 7 logged out. Waiting for processes to exit. Mar 19 11:33:37.225924 systemd[1]: session-7.scope: Deactivated successfully. Mar 19 11:33:37.230476 systemd-logind[1936]: Removed session 7. 
Mar 19 11:33:37.258620 systemd[1]: Started sshd@7-172.31.16.168:22-139.178.68.195:46050.service - OpenSSH per-connection server daemon (139.178.68.195:46050). Mar 19 11:33:37.435845 sshd[2259]: Accepted publickey for core from 139.178.68.195 port 46050 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:37.438319 sshd-session[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:37.449278 systemd-logind[1936]: New session 8 of user core. Mar 19 11:33:37.451470 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 19 11:33:37.554066 sudo[2263]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 19 11:33:37.554856 sudo[2263]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:33:37.561703 sudo[2263]: pam_unix(sudo:session): session closed for user root Mar 19 11:33:37.571982 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 19 11:33:37.573132 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:33:37.592858 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:33:37.649218 augenrules[2285]: No rules Mar 19 11:33:37.650735 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:33:37.652311 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:33:37.654494 sudo[2262]: pam_unix(sudo:session): session closed for user root Mar 19 11:33:37.677938 sshd[2261]: Connection closed by 139.178.68.195 port 46050 Mar 19 11:33:37.678466 sshd-session[2259]: pam_unix(sshd:session): session closed for user core Mar 19 11:33:37.684823 systemd[1]: sshd@7-172.31.16.168:22-139.178.68.195:46050.service: Deactivated successfully. Mar 19 11:33:37.688879 systemd[1]: session-8.scope: Deactivated successfully. 
Mar 19 11:33:37.690795 systemd-logind[1936]: Session 8 logged out. Waiting for processes to exit. Mar 19 11:33:37.692761 systemd-logind[1936]: Removed session 8. Mar 19 11:33:37.727640 systemd[1]: Started sshd@8-172.31.16.168:22-139.178.68.195:46052.service - OpenSSH per-connection server daemon (139.178.68.195:46052). Mar 19 11:33:37.910422 sshd[2294]: Accepted publickey for core from 139.178.68.195 port 46052 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:33:37.912819 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:33:37.923540 systemd-logind[1936]: New session 9 of user core. Mar 19 11:33:37.935431 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 19 11:33:38.040247 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 19 11:33:38.040902 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:33:38.482839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 19 11:33:38.495342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:33:38.590705 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 19 11:33:38.603780 (dockerd)[2316]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 19 11:33:38.894665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 19 11:33:38.903891 (kubelet)[2327]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:33:38.967308 dockerd[2316]: time="2025-03-19T11:33:38.967123951Z" level=info msg="Starting up" Mar 19 11:33:38.990736 kubelet[2327]: E0319 11:33:38.990650 2327 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:33:38.998056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:33:38.998473 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:33:39.000343 systemd[1]: kubelet.service: Consumed 303ms CPU time, 101.2M memory peak. Mar 19 11:33:39.082772 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3902988388-merged.mount: Deactivated successfully. Mar 19 11:33:39.115050 dockerd[2316]: time="2025-03-19T11:33:39.114635212Z" level=info msg="Loading containers: start." Mar 19 11:33:39.352239 kernel: Initializing XFRM netlink socket Mar 19 11:33:39.387231 (udev-worker)[2352]: Network interface NamePolicy= disabled on kernel command line. Mar 19 11:33:39.478589 systemd-networkd[1866]: docker0: Link UP Mar 19 11:33:39.516531 dockerd[2316]: time="2025-03-19T11:33:39.516366330Z" level=info msg="Loading containers: done." 
Mar 19 11:33:39.542661 dockerd[2316]: time="2025-03-19T11:33:39.541810770Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 19 11:33:39.542661 dockerd[2316]: time="2025-03-19T11:33:39.541980234Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 19 11:33:39.542661 dockerd[2316]: time="2025-03-19T11:33:39.542223858Z" level=info msg="Daemon has completed initialization" Mar 19 11:33:39.542974 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck992524923-merged.mount: Deactivated successfully. Mar 19 11:33:39.603496 dockerd[2316]: time="2025-03-19T11:33:39.603101839Z" level=info msg="API listen on /run/docker.sock" Mar 19 11:33:39.603889 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 19 11:33:40.486087 containerd[1949]: time="2025-03-19T11:33:40.485674891Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 19 11:33:41.077548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444244373.mount: Deactivated successfully. 
Mar 19 11:33:43.204231 containerd[1949]: time="2025-03-19T11:33:43.203249720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:43.205814 containerd[1949]: time="2025-03-19T11:33:43.205729328Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26231950" Mar 19 11:33:43.208351 containerd[1949]: time="2025-03-19T11:33:43.208280552Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:43.215011 containerd[1949]: time="2025-03-19T11:33:43.214948712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:43.223108 containerd[1949]: time="2025-03-19T11:33:43.222873453Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 2.737133906s" Mar 19 11:33:43.223108 containerd[1949]: time="2025-03-19T11:33:43.222938457Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\"" Mar 19 11:33:43.224112 containerd[1949]: time="2025-03-19T11:33:43.223997217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 19 11:33:45.849235 containerd[1949]: time="2025-03-19T11:33:45.848858546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:45.851052 containerd[1949]: time="2025-03-19T11:33:45.850966418Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530032" Mar 19 11:33:45.853464 containerd[1949]: time="2025-03-19T11:33:45.853386566Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:45.859538 containerd[1949]: time="2025-03-19T11:33:45.859440050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:45.861996 containerd[1949]: time="2025-03-19T11:33:45.861743186Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" in 2.637690229s" Mar 19 11:33:45.861996 containerd[1949]: time="2025-03-19T11:33:45.861809906Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\"" Mar 19 11:33:45.863297 containerd[1949]: time="2025-03-19T11:33:45.862966742Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 19 11:33:47.528233 containerd[1949]: time="2025-03-19T11:33:47.527838098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:47.530065 containerd[1949]: time="2025-03-19T11:33:47.529971890Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482561" Mar 19 11:33:47.532719 containerd[1949]: time="2025-03-19T11:33:47.532650374Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:47.538625 containerd[1949]: time="2025-03-19T11:33:47.538542746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:47.541219 containerd[1949]: time="2025-03-19T11:33:47.540643922Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 1.677624068s" Mar 19 11:33:47.541219 containerd[1949]: time="2025-03-19T11:33:47.540700826Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\"" Mar 19 11:33:47.541556 containerd[1949]: time="2025-03-19T11:33:47.541497890Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 19 11:33:49.232991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 19 11:33:49.242953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:33:49.300115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359041049.mount: Deactivated successfully. Mar 19 11:33:49.595625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 19 11:33:49.601061 (kubelet)[2596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:33:49.681378 kubelet[2596]: E0319 11:33:49.680661 2596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:33:49.685968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:33:49.686354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:33:49.686869 systemd[1]: kubelet.service: Consumed 295ms CPU time, 102M memory peak. Mar 19 11:33:50.021924 containerd[1949]: time="2025-03-19T11:33:50.021834758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:50.024296 containerd[1949]: time="2025-03-19T11:33:50.024215474Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370095" Mar 19 11:33:50.027298 containerd[1949]: time="2025-03-19T11:33:50.027214946Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:50.032691 containerd[1949]: time="2025-03-19T11:33:50.032614658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:50.034943 containerd[1949]: time="2025-03-19T11:33:50.034750310Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id 
\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 2.493193152s" Mar 19 11:33:50.036406 containerd[1949]: time="2025-03-19T11:33:50.034852058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\"" Mar 19 11:33:50.037798 containerd[1949]: time="2025-03-19T11:33:50.037730414Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 19 11:33:50.730437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2909045199.mount: Deactivated successfully. Mar 19 11:33:52.115210 containerd[1949]: time="2025-03-19T11:33:52.113612069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:52.116514 containerd[1949]: time="2025-03-19T11:33:52.116452865Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Mar 19 11:33:52.118981 containerd[1949]: time="2025-03-19T11:33:52.118929965Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:52.125414 containerd[1949]: time="2025-03-19T11:33:52.125348993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:52.127817 containerd[1949]: time="2025-03-19T11:33:52.127769813Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.089971791s" Mar 19 11:33:52.127985 containerd[1949]: time="2025-03-19T11:33:52.127956341Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Mar 19 11:33:52.128806 containerd[1949]: time="2025-03-19T11:33:52.128745137Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 19 11:33:52.660970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942926684.mount: Deactivated successfully. Mar 19 11:33:52.676360 containerd[1949]: time="2025-03-19T11:33:52.676283119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:52.678258 containerd[1949]: time="2025-03-19T11:33:52.678192163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 19 11:33:52.680856 containerd[1949]: time="2025-03-19T11:33:52.680786563Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:52.687322 containerd[1949]: time="2025-03-19T11:33:52.687268100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:52.689135 containerd[1949]: time="2025-03-19T11:33:52.688789796Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 559.829451ms" Mar 19 11:33:52.689135 containerd[1949]: time="2025-03-19T11:33:52.688842596Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 19 11:33:52.690828 containerd[1949]: time="2025-03-19T11:33:52.690753188Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 19 11:33:53.268310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170862273.mount: Deactivated successfully. Mar 19 11:33:53.270408 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 19 11:33:57.626809 containerd[1949]: time="2025-03-19T11:33:57.625998024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:57.628579 containerd[1949]: time="2025-03-19T11:33:57.628494804Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Mar 19 11:33:57.631511 containerd[1949]: time="2025-03-19T11:33:57.631438344Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:57.642289 containerd[1949]: time="2025-03-19T11:33:57.642232788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:33:57.647405 containerd[1949]: time="2025-03-19T11:33:57.647261388Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest 
\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.956423492s" Mar 19 11:33:57.647405 containerd[1949]: time="2025-03-19T11:33:57.647315796Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Mar 19 11:33:59.733343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 19 11:33:59.742544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:34:00.084569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:34:00.086031 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:34:00.164844 kubelet[2746]: E0319 11:34:00.164780 2746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:34:00.168448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:34:00.168941 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:34:00.169793 systemd[1]: kubelet.service: Consumed 275ms CPU time, 102.1M memory peak. Mar 19 11:34:04.903831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:34:04.905022 systemd[1]: kubelet.service: Consumed 275ms CPU time, 102.1M memory peak. Mar 19 11:34:04.916670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:34:04.974907 systemd[1]: Reload requested from client PID 2760 ('systemctl') (unit session-9.scope)... Mar 19 11:34:04.974940 systemd[1]: Reloading... 
Mar 19 11:34:05.274273 zram_generator::config[2808]: No configuration found. Mar 19 11:34:05.539108 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:34:05.783660 systemd[1]: Reloading finished in 808 ms. Mar 19 11:34:05.894515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:34:05.904407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:34:05.908784 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:34:05.910416 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:34:05.910516 systemd[1]: kubelet.service: Consumed 234ms CPU time, 89.4M memory peak. Mar 19 11:34:05.922704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:34:06.238488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:34:06.251709 (kubelet)[2870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:34:06.322974 kubelet[2870]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:34:06.322974 kubelet[2870]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 19 11:34:06.322974 kubelet[2870]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 19 11:34:06.323563 kubelet[2870]: I0319 11:34:06.323068 2870 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:34:07.481161 update_engine[1937]: I20250319 11:34:07.480208 1937 update_attempter.cc:509] Updating boot flags... Mar 19 11:34:07.608955 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2890) Mar 19 11:34:08.023284 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2893) Mar 19 11:34:08.127564 kubelet[2870]: I0319 11:34:08.127498 2870 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 19 11:34:08.127564 kubelet[2870]: I0319 11:34:08.127552 2870 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:34:08.128153 kubelet[2870]: I0319 11:34:08.128022 2870 server.go:954] "Client rotation is on, will bootstrap in background" Mar 19 11:34:08.177113 kubelet[2870]: E0319 11:34:08.177051 2870 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.16.168:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.168:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:08.183028 kubelet[2870]: I0319 11:34:08.181984 2870 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:34:08.227030 kubelet[2870]: E0319 11:34:08.226964 2870 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:34:08.227030 kubelet[2870]: I0319 11:34:08.227024 2870 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when 
KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 19 11:34:08.236534 kubelet[2870]: I0319 11:34:08.236446 2870 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 19 11:34:08.237064 kubelet[2870]: I0319 11:34:08.236998 2870 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:34:08.237691 kubelet[2870]: I0319 11:34:08.237051 2870 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-168","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceC
PULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:34:08.237897 kubelet[2870]: I0319 11:34:08.237726 2870 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:34:08.237897 kubelet[2870]: I0319 11:34:08.237752 2870 container_manager_linux.go:304] "Creating device plugin manager" Mar 19 11:34:08.238007 kubelet[2870]: I0319 11:34:08.237974 2870 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:34:08.253523 kubelet[2870]: I0319 11:34:08.253261 2870 kubelet.go:446] "Attempting to sync node with API server" Mar 19 11:34:08.258780 kubelet[2870]: I0319 11:34:08.256272 2870 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:34:08.258780 kubelet[2870]: I0319 11:34:08.256329 2870 kubelet.go:352] "Adding apiserver pod source" Mar 19 11:34:08.258780 kubelet[2870]: I0319 11:34:08.256366 2870 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:34:08.277696 kubelet[2870]: W0319 11:34:08.272956 2870 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-168&limit=500&resourceVersion=0": dial tcp 172.31.16.168:6443: connect: connection refused Mar 19 11:34:08.277696 kubelet[2870]: E0319 11:34:08.273052 2870 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-168&limit=500&resourceVersion=0\": dial tcp 172.31.16.168:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:08.277885 kubelet[2870]: W0319 11:34:08.277724 2870 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.31.16.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.168:6443: connect: connection refused Mar 19 11:34:08.277885 kubelet[2870]: E0319 11:34:08.277808 2870 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.168:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:08.277997 kubelet[2870]: I0319 11:34:08.277948 2870 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:34:08.288003 kubelet[2870]: I0319 11:34:08.287936 2870 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:34:08.288113 kubelet[2870]: W0319 11:34:08.288082 2870 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 19 11:34:08.307006 kubelet[2870]: I0319 11:34:08.306639 2870 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 19 11:34:08.307006 kubelet[2870]: I0319 11:34:08.306704 2870 server.go:1287] "Started kubelet" Mar 19 11:34:08.338531 kubelet[2870]: I0319 11:34:08.338497 2870 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:34:08.339878 kubelet[2870]: E0319 11:34:08.339247 2870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.168:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.168:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-168.182e310f75a04edd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-168,UID:ip-172-31-16-168,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-168,},FirstTimestamp:2025-03-19 11:34:08.306671325 +0000 UTC m=+2.048426915,LastTimestamp:2025-03-19 11:34:08.306671325 +0000 UTC m=+2.048426915,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-168,}" Mar 19 11:34:08.345411 kubelet[2870]: I0319 11:34:08.345328 2870 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:34:08.345709 kubelet[2870]: I0319 11:34:08.345655 2870 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 19 11:34:08.347277 kubelet[2870]: E0319 11:34:08.346118 2870 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-16-168\" not found" Mar 19 11:34:08.347277 kubelet[2870]: I0319 11:34:08.346351 2870 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:34:08.347277 kubelet[2870]: I0319 11:34:08.346700 2870 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:34:08.347277 kubelet[2870]: I0319 11:34:08.346964 2870 server.go:490] "Adding debug handlers to kubelet server" Mar 19 11:34:08.349683 kubelet[2870]: I0319 11:34:08.348690 2870 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:34:08.353073 kubelet[2870]: E0319 11:34:08.352991 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-168?timeout=10s\": dial tcp 172.31.16.168:6443: connect: connection refused" interval="200ms" Mar 19 11:34:08.353073 kubelet[2870]: I0319 11:34:08.353076 2870 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:34:08.353673 kubelet[2870]: I0319 11:34:08.353629 2870 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:34:08.353815 kubelet[2870]: I0319 11:34:08.353774 2870 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:34:08.354214 kubelet[2870]: E0319 11:34:08.354144 2870 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:34:08.355804 kubelet[2870]: W0319 11:34:08.355640 2870 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.168:6443: connect: connection refused Mar 19 11:34:08.355804 kubelet[2870]: E0319 11:34:08.355727 2870 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.168:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:08.356901 kubelet[2870]: I0319 11:34:08.356506 2870 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:34:08.357037 kubelet[2870]: I0319 11:34:08.356983 2870 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:34:08.379912 kubelet[2870]: I0319 11:34:08.379600 2870 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:34:08.387707 kubelet[2870]: I0319 11:34:08.387340 2870 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:34:08.387707 kubelet[2870]: I0319 11:34:08.387387 2870 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 19 11:34:08.387707 kubelet[2870]: I0319 11:34:08.387417 2870 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 19 11:34:08.387707 kubelet[2870]: I0319 11:34:08.387435 2870 kubelet.go:2388] "Starting kubelet main sync loop" Mar 19 11:34:08.387707 kubelet[2870]: E0319 11:34:08.387504 2870 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:34:08.391250 kubelet[2870]: W0319 11:34:08.390767 2870 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.168:6443: connect: connection refused Mar 19 11:34:08.391250 kubelet[2870]: E0319 11:34:08.390832 2870 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.168:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:08.402507 kubelet[2870]: I0319 11:34:08.402464 2870 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 19 11:34:08.402507 kubelet[2870]: I0319 11:34:08.402498 2870 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 19 11:34:08.402714 kubelet[2870]: I0319 11:34:08.402531 2870 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:34:08.404620 kubelet[2870]: I0319 11:34:08.404574 2870 policy_none.go:49] "None policy: Start" Mar 19 11:34:08.404620 kubelet[2870]: I0319 11:34:08.404614 2870 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 19 11:34:08.404782 kubelet[2870]: I0319 11:34:08.404639 2870 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:34:08.414605 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 19 11:34:08.434212 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 19 11:34:08.441391 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 19 11:34:08.446931 kubelet[2870]: E0319 11:34:08.446891 2870 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-16-168\" not found" Mar 19 11:34:08.452821 kubelet[2870]: I0319 11:34:08.452789 2870 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:34:08.453661 kubelet[2870]: I0319 11:34:08.453238 2870 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:34:08.453661 kubelet[2870]: I0319 11:34:08.453264 2870 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:34:08.453661 kubelet[2870]: I0319 11:34:08.453440 2870 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:34:08.456266 kubelet[2870]: E0319 11:34:08.455551 2870 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 19 11:34:08.456266 kubelet[2870]: E0319 11:34:08.455626 2870 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-168\" not found" Mar 19 11:34:08.507665 systemd[1]: Created slice kubepods-burstable-pod609e78101dd73a0d1dfc7161d4ffc2d1.slice - libcontainer container kubepods-burstable-pod609e78101dd73a0d1dfc7161d4ffc2d1.slice. 
Mar 19 11:34:08.517726 kubelet[2870]: E0319 11:34:08.517348 2870 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-168\" not found" node="ip-172-31-16-168" Mar 19 11:34:08.523860 systemd[1]: Created slice kubepods-burstable-pod29065da2dfd8c30aaa5391fc9512f836.slice - libcontainer container kubepods-burstable-pod29065da2dfd8c30aaa5391fc9512f836.slice. Mar 19 11:34:08.530130 kubelet[2870]: E0319 11:34:08.528186 2870 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-168\" not found" node="ip-172-31-16-168" Mar 19 11:34:08.533367 systemd[1]: Created slice kubepods-burstable-poda31f5c208bbaba3cfbf12ba84a917200.slice - libcontainer container kubepods-burstable-poda31f5c208bbaba3cfbf12ba84a917200.slice. Mar 19 11:34:08.537053 kubelet[2870]: E0319 11:34:08.537010 2870 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-168\" not found" node="ip-172-31-16-168" Mar 19 11:34:08.554849 kubelet[2870]: E0319 11:34:08.554773 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-168?timeout=10s\": dial tcp 172.31.16.168:6443: connect: connection refused" interval="400ms" Mar 19 11:34:08.556410 kubelet[2870]: I0319 11:34:08.556360 2870 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-168" Mar 19 11:34:08.557063 kubelet[2870]: E0319 11:34:08.557004 2870 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.16.168:6443/api/v1/nodes\": dial tcp 172.31.16.168:6443: connect: connection refused" node="ip-172-31-16-168" Mar 19 11:34:08.558571 kubelet[2870]: I0319 11:34:08.558434 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/609e78101dd73a0d1dfc7161d4ffc2d1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-168\" (UID: \"609e78101dd73a0d1dfc7161d4ffc2d1\") " pod="kube-system/kube-apiserver-ip-172-31-16-168" Mar 19 11:34:08.558571 kubelet[2870]: I0319 11:34:08.558487 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29065da2dfd8c30aaa5391fc9512f836-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-168\" (UID: \"29065da2dfd8c30aaa5391fc9512f836\") " pod="kube-system/kube-controller-manager-ip-172-31-16-168" Mar 19 11:34:08.558571 kubelet[2870]: I0319 11:34:08.558529 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a31f5c208bbaba3cfbf12ba84a917200-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-168\" (UID: \"a31f5c208bbaba3cfbf12ba84a917200\") " pod="kube-system/kube-scheduler-ip-172-31-16-168" Mar 19 11:34:08.558786 kubelet[2870]: I0319 11:34:08.558581 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/609e78101dd73a0d1dfc7161d4ffc2d1-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-168\" (UID: \"609e78101dd73a0d1dfc7161d4ffc2d1\") " pod="kube-system/kube-apiserver-ip-172-31-16-168" Mar 19 11:34:08.558786 kubelet[2870]: I0319 11:34:08.558632 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29065da2dfd8c30aaa5391fc9512f836-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-168\" (UID: \"29065da2dfd8c30aaa5391fc9512f836\") " pod="kube-system/kube-controller-manager-ip-172-31-16-168" Mar 19 11:34:08.558786 kubelet[2870]: I0319 11:34:08.558678 2870 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29065da2dfd8c30aaa5391fc9512f836-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-168\" (UID: \"29065da2dfd8c30aaa5391fc9512f836\") " pod="kube-system/kube-controller-manager-ip-172-31-16-168" Mar 19 11:34:08.558786 kubelet[2870]: I0319 11:34:08.558716 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29065da2dfd8c30aaa5391fc9512f836-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-168\" (UID: \"29065da2dfd8c30aaa5391fc9512f836\") " pod="kube-system/kube-controller-manager-ip-172-31-16-168" Mar 19 11:34:08.558978 kubelet[2870]: I0319 11:34:08.558785 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29065da2dfd8c30aaa5391fc9512f836-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-168\" (UID: \"29065da2dfd8c30aaa5391fc9512f836\") " pod="kube-system/kube-controller-manager-ip-172-31-16-168" Mar 19 11:34:08.558978 kubelet[2870]: I0319 11:34:08.558823 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/609e78101dd73a0d1dfc7161d4ffc2d1-ca-certs\") pod \"kube-apiserver-ip-172-31-16-168\" (UID: \"609e78101dd73a0d1dfc7161d4ffc2d1\") " pod="kube-system/kube-apiserver-ip-172-31-16-168" Mar 19 11:34:08.759965 kubelet[2870]: I0319 11:34:08.759864 2870 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-168" Mar 19 11:34:08.760598 kubelet[2870]: E0319 11:34:08.760552 2870 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.16.168:6443/api/v1/nodes\": dial tcp 172.31.16.168:6443: connect: connection refused" node="ip-172-31-16-168" Mar 19 
11:34:08.819335 containerd[1949]: time="2025-03-19T11:34:08.819070332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-168,Uid:609e78101dd73a0d1dfc7161d4ffc2d1,Namespace:kube-system,Attempt:0,}" Mar 19 11:34:08.830044 containerd[1949]: time="2025-03-19T11:34:08.829906716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-168,Uid:29065da2dfd8c30aaa5391fc9512f836,Namespace:kube-system,Attempt:0,}" Mar 19 11:34:08.839107 containerd[1949]: time="2025-03-19T11:34:08.838884144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-168,Uid:a31f5c208bbaba3cfbf12ba84a917200,Namespace:kube-system,Attempt:0,}" Mar 19 11:34:08.956125 kubelet[2870]: E0319 11:34:08.956069 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-168?timeout=10s\": dial tcp 172.31.16.168:6443: connect: connection refused" interval="800ms" Mar 19 11:34:09.162887 kubelet[2870]: I0319 11:34:09.162737 2870 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-168" Mar 19 11:34:09.163453 kubelet[2870]: E0319 11:34:09.163334 2870 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.16.168:6443/api/v1/nodes\": dial tcp 172.31.16.168:6443: connect: connection refused" node="ip-172-31-16-168" Mar 19 11:34:09.290224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1758065703.mount: Deactivated successfully. 
Mar 19 11:34:09.308093 containerd[1949]: time="2025-03-19T11:34:09.307194070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:34:09.315715 containerd[1949]: time="2025-03-19T11:34:09.315635554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 19 11:34:09.317629 containerd[1949]: time="2025-03-19T11:34:09.317569246Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:34:09.320606 containerd[1949]: time="2025-03-19T11:34:09.320318998Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:34:09.323931 containerd[1949]: time="2025-03-19T11:34:09.323876158Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:34:09.326078 containerd[1949]: time="2025-03-19T11:34:09.326000506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:34:09.328015 containerd[1949]: time="2025-03-19T11:34:09.327613090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:34:09.330391 containerd[1949]: time="2025-03-19T11:34:09.330343870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:34:09.332471 
containerd[1949]: time="2025-03-19T11:34:09.332404174Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 513.228926ms" Mar 19 11:34:09.339759 kubelet[2870]: W0319 11:34:09.339667 2870 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-168&limit=500&resourceVersion=0": dial tcp 172.31.16.168:6443: connect: connection refused Mar 19 11:34:09.339921 kubelet[2870]: E0319 11:34:09.339771 2870 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.16.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-168&limit=500&resourceVersion=0\": dial tcp 172.31.16.168:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:09.341108 containerd[1949]: time="2025-03-19T11:34:09.340123222Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 501.121814ms" Mar 19 11:34:09.344087 containerd[1949]: time="2025-03-19T11:34:09.343837510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 513.81467ms" Mar 19 11:34:09.437237 kubelet[2870]: W0319 
11:34:09.436917 2870 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.16.168:6443: connect: connection refused Mar 19 11:34:09.437237 kubelet[2870]: E0319 11:34:09.437018 2870 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.16.168:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.168:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:09.559125 containerd[1949]: time="2025-03-19T11:34:09.558179735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:34:09.561713 containerd[1949]: time="2025-03-19T11:34:09.561406823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:34:09.561713 containerd[1949]: time="2025-03-19T11:34:09.561536219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:34:09.561713 containerd[1949]: time="2025-03-19T11:34:09.561575195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:09.562397 containerd[1949]: time="2025-03-19T11:34:09.562284335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:34:09.562397 containerd[1949]: time="2025-03-19T11:34:09.562336055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:09.563135 containerd[1949]: time="2025-03-19T11:34:09.562778939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:09.565153 containerd[1949]: time="2025-03-19T11:34:09.564960431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:09.567803 containerd[1949]: time="2025-03-19T11:34:09.567339779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:34:09.567803 containerd[1949]: time="2025-03-19T11:34:09.567432575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:34:09.567803 containerd[1949]: time="2025-03-19T11:34:09.567458351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:09.567803 containerd[1949]: time="2025-03-19T11:34:09.567584315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:09.620518 systemd[1]: Started cri-containerd-02cdabb7b2c8da96423e047ce9c5356e6eb71352893019e9b989dfa8e8ca9e23.scope - libcontainer container 02cdabb7b2c8da96423e047ce9c5356e6eb71352893019e9b989dfa8e8ca9e23. Mar 19 11:34:09.625458 systemd[1]: Started cri-containerd-cb9ee3d7d8bb9e144e04c57d3ae51e2a6c2bc7382ddaff7222882c1ff2233d1e.scope - libcontainer container cb9ee3d7d8bb9e144e04c57d3ae51e2a6c2bc7382ddaff7222882c1ff2233d1e. Mar 19 11:34:09.629132 systemd[1]: Started cri-containerd-ddd5db12e77fd3f1aeb33ded1d3acc9a1827c46e277648d574797beb9cb2e7c1.scope - libcontainer container ddd5db12e77fd3f1aeb33ded1d3acc9a1827c46e277648d574797beb9cb2e7c1. 
Mar 19 11:34:09.725296 containerd[1949]: time="2025-03-19T11:34:09.723662100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-168,Uid:a31f5c208bbaba3cfbf12ba84a917200,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddd5db12e77fd3f1aeb33ded1d3acc9a1827c46e277648d574797beb9cb2e7c1\"" Mar 19 11:34:09.736657 containerd[1949]: time="2025-03-19T11:34:09.734827668Z" level=info msg="CreateContainer within sandbox \"ddd5db12e77fd3f1aeb33ded1d3acc9a1827c46e277648d574797beb9cb2e7c1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 11:34:09.741739 kubelet[2870]: W0319 11:34:09.741553 2870 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.168:6443: connect: connection refused Mar 19 11:34:09.741739 kubelet[2870]: E0319 11:34:09.741675 2870 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.16.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.168:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:09.755705 containerd[1949]: time="2025-03-19T11:34:09.754741404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-168,Uid:609e78101dd73a0d1dfc7161d4ffc2d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb9ee3d7d8bb9e144e04c57d3ae51e2a6c2bc7382ddaff7222882c1ff2233d1e\"" Mar 19 11:34:09.758947 kubelet[2870]: E0319 11:34:09.758882 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-168?timeout=10s\": dial tcp 172.31.16.168:6443: connect: connection refused" interval="1.6s" Mar 19 11:34:09.773598 
containerd[1949]: time="2025-03-19T11:34:09.773548524Z" level=info msg="CreateContainer within sandbox \"cb9ee3d7d8bb9e144e04c57d3ae51e2a6c2bc7382ddaff7222882c1ff2233d1e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 11:34:09.779407 containerd[1949]: time="2025-03-19T11:34:09.779345832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-168,Uid:29065da2dfd8c30aaa5391fc9512f836,Namespace:kube-system,Attempt:0,} returns sandbox id \"02cdabb7b2c8da96423e047ce9c5356e6eb71352893019e9b989dfa8e8ca9e23\"" Mar 19 11:34:09.785656 containerd[1949]: time="2025-03-19T11:34:09.784644600Z" level=info msg="CreateContainer within sandbox \"ddd5db12e77fd3f1aeb33ded1d3acc9a1827c46e277648d574797beb9cb2e7c1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a81566a4450475bbe2d4832b84967b355c84bb1429f24413d883b607cc5bfa17\"" Mar 19 11:34:09.785656 containerd[1949]: time="2025-03-19T11:34:09.785033520Z" level=info msg="CreateContainer within sandbox \"02cdabb7b2c8da96423e047ce9c5356e6eb71352893019e9b989dfa8e8ca9e23\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 11:34:09.786653 containerd[1949]: time="2025-03-19T11:34:09.786589020Z" level=info msg="StartContainer for \"a81566a4450475bbe2d4832b84967b355c84bb1429f24413d883b607cc5bfa17\"" Mar 19 11:34:09.818239 kubelet[2870]: W0319 11:34:09.818147 2870 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.168:6443: connect: connection refused Mar 19 11:34:09.818513 kubelet[2870]: E0319 11:34:09.818478 2870 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.16.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
172.31.16.168:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:34:09.820305 containerd[1949]: time="2025-03-19T11:34:09.820252561Z" level=info msg="CreateContainer within sandbox \"cb9ee3d7d8bb9e144e04c57d3ae51e2a6c2bc7382ddaff7222882c1ff2233d1e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"88114dbef3affb8f111386166d9383fb6cb0806b7dad49fed458501623c8060b\"" Mar 19 11:34:09.823235 containerd[1949]: time="2025-03-19T11:34:09.821499445Z" level=info msg="StartContainer for \"88114dbef3affb8f111386166d9383fb6cb0806b7dad49fed458501623c8060b\"" Mar 19 11:34:09.831603 containerd[1949]: time="2025-03-19T11:34:09.831552181Z" level=info msg="CreateContainer within sandbox \"02cdabb7b2c8da96423e047ce9c5356e6eb71352893019e9b989dfa8e8ca9e23\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0edb0eb314ef4e336e6e939b57a807228a13af9f340a138abef601aa45e2f1f2\"" Mar 19 11:34:09.834451 containerd[1949]: time="2025-03-19T11:34:09.833782921Z" level=info msg="StartContainer for \"0edb0eb314ef4e336e6e939b57a807228a13af9f340a138abef601aa45e2f1f2\"" Mar 19 11:34:09.842635 systemd[1]: Started cri-containerd-a81566a4450475bbe2d4832b84967b355c84bb1429f24413d883b607cc5bfa17.scope - libcontainer container a81566a4450475bbe2d4832b84967b355c84bb1429f24413d883b607cc5bfa17. Mar 19 11:34:09.911496 systemd[1]: Started cri-containerd-0edb0eb314ef4e336e6e939b57a807228a13af9f340a138abef601aa45e2f1f2.scope - libcontainer container 0edb0eb314ef4e336e6e939b57a807228a13af9f340a138abef601aa45e2f1f2. Mar 19 11:34:09.925587 systemd[1]: Started cri-containerd-88114dbef3affb8f111386166d9383fb6cb0806b7dad49fed458501623c8060b.scope - libcontainer container 88114dbef3affb8f111386166d9383fb6cb0806b7dad49fed458501623c8060b. 
Mar 19 11:34:09.968640 containerd[1949]: time="2025-03-19T11:34:09.965848729Z" level=info msg="StartContainer for \"a81566a4450475bbe2d4832b84967b355c84bb1429f24413d883b607cc5bfa17\" returns successfully" Mar 19 11:34:09.974590 kubelet[2870]: I0319 11:34:09.974515 2870 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-168" Mar 19 11:34:09.976022 kubelet[2870]: E0319 11:34:09.975781 2870 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.16.168:6443/api/v1/nodes\": dial tcp 172.31.16.168:6443: connect: connection refused" node="ip-172-31-16-168" Mar 19 11:34:10.063469 containerd[1949]: time="2025-03-19T11:34:10.063039658Z" level=info msg="StartContainer for \"0edb0eb314ef4e336e6e939b57a807228a13af9f340a138abef601aa45e2f1f2\" returns successfully" Mar 19 11:34:10.075664 containerd[1949]: time="2025-03-19T11:34:10.075577726Z" level=info msg="StartContainer for \"88114dbef3affb8f111386166d9383fb6cb0806b7dad49fed458501623c8060b\" returns successfully" Mar 19 11:34:10.417036 kubelet[2870]: E0319 11:34:10.416649 2870 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-168\" not found" node="ip-172-31-16-168" Mar 19 11:34:10.424293 kubelet[2870]: E0319 11:34:10.423771 2870 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-168\" not found" node="ip-172-31-16-168" Mar 19 11:34:10.429440 kubelet[2870]: E0319 11:34:10.429393 2870 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-168\" not found" node="ip-172-31-16-168" Mar 19 11:34:11.433556 kubelet[2870]: E0319 11:34:11.433078 2870 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-168\" not found" node="ip-172-31-16-168" Mar 19 11:34:11.436678 kubelet[2870]: E0319 
11:34:11.436646 2870 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-168\" not found" node="ip-172-31-16-168" Mar 19 11:34:11.578931 kubelet[2870]: I0319 11:34:11.578876 2870 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-168" Mar 19 11:34:13.632598 kubelet[2870]: E0319 11:34:13.632522 2870 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-168\" not found" node="ip-172-31-16-168" Mar 19 11:34:13.686476 kubelet[2870]: I0319 11:34:13.685243 2870 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-16-168" Mar 19 11:34:13.747770 kubelet[2870]: I0319 11:34:13.747709 2870 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-168" Mar 19 11:34:13.788195 kubelet[2870]: E0319 11:34:13.788003 2870 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-168\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-168" Mar 19 11:34:13.788195 kubelet[2870]: I0319 11:34:13.788072 2870 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-168" Mar 19 11:34:13.798183 kubelet[2870]: E0319 11:34:13.797522 2870 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-168\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-168" Mar 19 11:34:13.798183 kubelet[2870]: I0319 11:34:13.797565 2870 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-168" Mar 19 11:34:13.807834 kubelet[2870]: E0319 11:34:13.807762 2870 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-168\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-168" Mar 19 11:34:14.280807 kubelet[2870]: I0319 11:34:14.280756 2870 apiserver.go:52] "Watching apiserver" Mar 19 11:34:14.353814 kubelet[2870]: I0319 11:34:14.353742 2870 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 19 11:34:14.540636 kubelet[2870]: I0319 11:34:14.539897 2870 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-168" Mar 19 11:34:15.733531 systemd[1]: Reload requested from client PID 3329 ('systemctl') (unit session-9.scope)... Mar 19 11:34:15.734012 systemd[1]: Reloading... Mar 19 11:34:15.937232 zram_generator::config[3380]: No configuration found. Mar 19 11:34:16.197014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:34:16.456863 systemd[1]: Reloading finished in 722 ms. Mar 19 11:34:16.503887 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:34:16.520973 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:34:16.522282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:34:16.522382 systemd[1]: kubelet.service: Consumed 2.487s CPU time, 126.9M memory peak. Mar 19 11:34:16.528692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:34:16.893610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:34:16.909788 (kubelet)[3434]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:34:17.006429 kubelet[3434]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:34:17.007776 kubelet[3434]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 19 11:34:17.007776 kubelet[3434]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:34:17.008052 kubelet[3434]: I0319 11:34:17.007082 3434 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:34:17.026906 kubelet[3434]: I0319 11:34:17.026538 3434 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 19 11:34:17.026906 kubelet[3434]: I0319 11:34:17.026589 3434 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:34:17.028429 kubelet[3434]: I0319 11:34:17.028279 3434 server.go:954] "Client rotation is on, will bootstrap in background" Mar 19 11:34:17.030783 kubelet[3434]: I0319 11:34:17.030722 3434 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 19 11:34:17.039109 kubelet[3434]: I0319 11:34:17.037223 3434 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:34:17.045512 sudo[3448]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 19 11:34:17.047555 sudo[3448]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 19 11:34:17.048429 kubelet[3434]: E0319 11:34:17.047636 3434 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:34:17.048429 kubelet[3434]: I0319 11:34:17.047708 3434 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 19 11:34:17.056205 kubelet[3434]: I0319 11:34:17.055645 3434 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 19 11:34:17.056373 kubelet[3434]: I0319 11:34:17.056205 3434 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:34:17.056823 kubelet[3434]: I0319 11:34:17.056255 3434 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-168","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:34:17.056823 kubelet[3434]: I0319 11:34:17.056601 3434 topology_manager.go:138] "Creating topology manager with none 
policy" Mar 19 11:34:17.056823 kubelet[3434]: I0319 11:34:17.056623 3434 container_manager_linux.go:304] "Creating device plugin manager" Mar 19 11:34:17.056823 kubelet[3434]: I0319 11:34:17.056720 3434 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:34:17.057147 kubelet[3434]: I0319 11:34:17.056988 3434 kubelet.go:446] "Attempting to sync node with API server" Mar 19 11:34:17.057147 kubelet[3434]: I0319 11:34:17.057010 3434 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:34:17.063217 kubelet[3434]: I0319 11:34:17.060197 3434 kubelet.go:352] "Adding apiserver pod source" Mar 19 11:34:17.063217 kubelet[3434]: I0319 11:34:17.060284 3434 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:34:17.064392 kubelet[3434]: I0319 11:34:17.064320 3434 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:34:17.065903 kubelet[3434]: I0319 11:34:17.065109 3434 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:34:17.066043 kubelet[3434]: I0319 11:34:17.065953 3434 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 19 11:34:17.066043 kubelet[3434]: I0319 11:34:17.065999 3434 server.go:1287] "Started kubelet" Mar 19 11:34:17.070662 kubelet[3434]: I0319 11:34:17.070117 3434 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:34:17.076675 kubelet[3434]: I0319 11:34:17.075886 3434 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:34:17.076918 kubelet[3434]: I0319 11:34:17.076877 3434 server.go:490] "Adding debug handlers to kubelet server" Mar 19 11:34:17.080939 kubelet[3434]: I0319 11:34:17.080693 3434 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:34:17.097216 kubelet[3434]: I0319 11:34:17.092993 
3434 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:34:17.100506 kubelet[3434]: I0319 11:34:17.099556 3434 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:34:17.125406 kubelet[3434]: I0319 11:34:17.125361 3434 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 19 11:34:17.126303 kubelet[3434]: E0319 11:34:17.125723 3434 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-16-168\" not found" Mar 19 11:34:17.128124 kubelet[3434]: I0319 11:34:17.128078 3434 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:34:17.130229 kubelet[3434]: I0319 11:34:17.128391 3434 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:34:17.154612 kubelet[3434]: I0319 11:34:17.151424 3434 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:34:17.154612 kubelet[3434]: I0319 11:34:17.151581 3434 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:34:17.154612 kubelet[3434]: I0319 11:34:17.152150 3434 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:34:17.156197 kubelet[3434]: I0319 11:34:17.155647 3434 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:34:17.156197 kubelet[3434]: I0319 11:34:17.155694 3434 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 19 11:34:17.156197 kubelet[3434]: I0319 11:34:17.155728 3434 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 19 11:34:17.156197 kubelet[3434]: I0319 11:34:17.155743 3434 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 19 11:34:17.156197 kubelet[3434]: E0319 11:34:17.155809 3434 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 19 11:34:17.197892 kubelet[3434]: I0319 11:34:17.196534 3434 factory.go:221] Registration of the containerd container factory successfully
Mar 19 11:34:17.202199 kubelet[3434]: E0319 11:34:17.202011 3434 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 19 11:34:17.256515 kubelet[3434]: E0319 11:34:17.256467 3434 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 19 11:34:17.344321 kubelet[3434]: I0319 11:34:17.343868 3434 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 19 11:34:17.344321 kubelet[3434]: I0319 11:34:17.343902 3434 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 19 11:34:17.344321 kubelet[3434]: I0319 11:34:17.343963 3434 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 11:34:17.346232 kubelet[3434]: I0319 11:34:17.344878 3434 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 19 11:34:17.346232 kubelet[3434]: I0319 11:34:17.344932 3434 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 19 11:34:17.346232 kubelet[3434]: I0319 11:34:17.345822 3434 policy_none.go:49] "None policy: Start"
Mar 19 11:34:17.346232 kubelet[3434]: I0319 11:34:17.345849 3434 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 19 11:34:17.346232 kubelet[3434]: I0319 11:34:17.345880 3434 state_mem.go:35] "Initializing new in-memory state store"
Mar 19 11:34:17.346232 kubelet[3434]: I0319 11:34:17.346144 3434 state_mem.go:75] "Updated machine memory state"
Mar 19 11:34:17.362182 kubelet[3434]: I0319 11:34:17.362119 3434 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 19 11:34:17.362836 kubelet[3434]: I0319 11:34:17.362628 3434 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 19 11:34:17.363758 kubelet[3434]: I0319 11:34:17.363697 3434 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 19 11:34:17.366011 kubelet[3434]: I0319 11:34:17.365570 3434 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 19 11:34:17.373688 kubelet[3434]: E0319 11:34:17.372733 3434 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 19 11:34:17.457664 kubelet[3434]: I0319 11:34:17.457536 3434 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-168"
Mar 19 11:34:17.459032 kubelet[3434]: I0319 11:34:17.457582 3434 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-168"
Mar 19 11:34:17.459657 kubelet[3434]: I0319 11:34:17.459214 3434 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-168"
Mar 19 11:34:17.475077 kubelet[3434]: E0319 11:34:17.474957 3434 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-168\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-168"
Mar 19 11:34:17.490787 kubelet[3434]: I0319 11:34:17.490364 3434 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-16-168"
Mar 19 11:34:17.514525 kubelet[3434]: I0319 11:34:17.514095 3434 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-16-168"
Mar 19 11:34:17.514525 kubelet[3434]: I0319 11:34:17.514267 3434 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-16-168"
Mar 19 11:34:17.542459 kubelet[3434]: I0319 11:34:17.541910 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/609e78101dd73a0d1dfc7161d4ffc2d1-ca-certs\") pod \"kube-apiserver-ip-172-31-16-168\" (UID: \"609e78101dd73a0d1dfc7161d4ffc2d1\") " pod="kube-system/kube-apiserver-ip-172-31-16-168"
Mar 19 11:34:17.542459 kubelet[3434]: I0319 11:34:17.541972 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/609e78101dd73a0d1dfc7161d4ffc2d1-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-168\" (UID: \"609e78101dd73a0d1dfc7161d4ffc2d1\") " pod="kube-system/kube-apiserver-ip-172-31-16-168"
Mar 19 11:34:17.542459 kubelet[3434]: I0319 11:34:17.542016 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29065da2dfd8c30aaa5391fc9512f836-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-168\" (UID: \"29065da2dfd8c30aaa5391fc9512f836\") " pod="kube-system/kube-controller-manager-ip-172-31-16-168"
Mar 19 11:34:17.542459 kubelet[3434]: I0319 11:34:17.542061 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29065da2dfd8c30aaa5391fc9512f836-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-168\" (UID: \"29065da2dfd8c30aaa5391fc9512f836\") " pod="kube-system/kube-controller-manager-ip-172-31-16-168"
Mar 19 11:34:17.542459 kubelet[3434]: I0319 11:34:17.542109 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29065da2dfd8c30aaa5391fc9512f836-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-168\" (UID: \"29065da2dfd8c30aaa5391fc9512f836\") " pod="kube-system/kube-controller-manager-ip-172-31-16-168"
Mar 19 11:34:17.542846 kubelet[3434]: I0319 11:34:17.542149 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29065da2dfd8c30aaa5391fc9512f836-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-168\" (UID: \"29065da2dfd8c30aaa5391fc9512f836\") " pod="kube-system/kube-controller-manager-ip-172-31-16-168"
Mar 19 11:34:17.544059 kubelet[3434]: I0319 11:34:17.543304 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/609e78101dd73a0d1dfc7161d4ffc2d1-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-168\" (UID: \"609e78101dd73a0d1dfc7161d4ffc2d1\") " pod="kube-system/kube-apiserver-ip-172-31-16-168"
Mar 19 11:34:17.544059 kubelet[3434]: I0319 11:34:17.543564 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29065da2dfd8c30aaa5391fc9512f836-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-168\" (UID: \"29065da2dfd8c30aaa5391fc9512f836\") " pod="kube-system/kube-controller-manager-ip-172-31-16-168"
Mar 19 11:34:17.544059 kubelet[3434]: I0319 11:34:17.543700 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a31f5c208bbaba3cfbf12ba84a917200-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-168\" (UID: \"a31f5c208bbaba3cfbf12ba84a917200\") " pod="kube-system/kube-scheduler-ip-172-31-16-168"
Mar 19 11:34:18.021206 sudo[3448]: pam_unix(sudo:session): session closed for user root
Mar 19 11:34:18.061390 kubelet[3434]: I0319 11:34:18.061329 3434 apiserver.go:52] "Watching apiserver"
Mar 19 11:34:18.129116 kubelet[3434]: I0319 11:34:18.128981 3434 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 19 11:34:18.240946 kubelet[3434]: I0319 11:34:18.240887 3434 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-168"
Mar 19 11:34:18.243950 kubelet[3434]: I0319 11:34:18.243366 3434 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-168"
Mar 19 11:34:18.249090 kubelet[3434]: E0319 11:34:18.248935 3434 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-168\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-168"
Mar 19 11:34:18.258395 kubelet[3434]: E0319 11:34:18.258334 3434 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-168\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-168"
Mar 19 11:34:18.301216 kubelet[3434]: I0319 11:34:18.300975 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-168" podStartSLOduration=4.300952747 podStartE2EDuration="4.300952747s" podCreationTimestamp="2025-03-19 11:34:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:18.284676355 +0000 UTC m=+1.364429384" watchObservedRunningTime="2025-03-19 11:34:18.300952747 +0000 UTC m=+1.380705776"
Mar 19 11:34:18.302107 kubelet[3434]: I0319 11:34:18.301186 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-168" podStartSLOduration=1.301158211 podStartE2EDuration="1.301158211s" podCreationTimestamp="2025-03-19 11:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:18.299686015 +0000 UTC m=+1.379439044" watchObservedRunningTime="2025-03-19 11:34:18.301158211 +0000 UTC m=+1.380911240"
Mar 19 11:34:18.374297 kubelet[3434]: I0319 11:34:18.374216 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-168" podStartSLOduration=1.374189671 podStartE2EDuration="1.374189671s" podCreationTimestamp="2025-03-19 11:34:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:18.323836891 +0000 UTC m=+1.403589944" watchObservedRunningTime="2025-03-19 11:34:18.374189671 +0000 UTC m=+1.453942736"
Mar 19 11:34:20.559056 sudo[2297]: pam_unix(sudo:session): session closed for user root
Mar 19 11:34:20.582880 sshd[2296]: Connection closed by 139.178.68.195 port 46052
Mar 19 11:34:20.583464 sshd-session[2294]: pam_unix(sshd:session): session closed for user core
Mar 19 11:34:20.592161 systemd[1]: sshd@8-172.31.16.168:22-139.178.68.195:46052.service: Deactivated successfully.
Mar 19 11:34:20.602219 systemd[1]: session-9.scope: Deactivated successfully.
Mar 19 11:34:20.604488 systemd[1]: session-9.scope: Consumed 10.920s CPU time, 262.1M memory peak.
Mar 19 11:34:20.609747 systemd-logind[1936]: Session 9 logged out. Waiting for processes to exit.
Mar 19 11:34:20.613721 systemd-logind[1936]: Removed session 9.
Mar 19 11:34:20.722466 kubelet[3434]: I0319 11:34:20.722407 3434 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 19 11:34:20.722988 containerd[1949]: time="2025-03-19T11:34:20.722847683Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 19 11:34:20.723468 kubelet[3434]: I0319 11:34:20.723128 3434 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 19 11:34:21.611227 systemd[1]: Created slice kubepods-besteffort-poda78d3aab_a1b3_420e_9764_280f1fd1d6f8.slice - libcontainer container kubepods-besteffort-poda78d3aab_a1b3_420e_9764_280f1fd1d6f8.slice.
Mar 19 11:34:21.630766 systemd[1]: Created slice kubepods-burstable-pod8f6c5844_fc22_4cc2_8edb_648a4fa4d836.slice - libcontainer container kubepods-burstable-pod8f6c5844_fc22_4cc2_8edb_648a4fa4d836.slice.
Mar 19 11:34:21.668569 kubelet[3434]: I0319 11:34:21.668499 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a78d3aab-a1b3-420e-9764-280f1fd1d6f8-xtables-lock\") pod \"kube-proxy-c9fdc\" (UID: \"a78d3aab-a1b3-420e-9764-280f1fd1d6f8\") " pod="kube-system/kube-proxy-c9fdc"
Mar 19 11:34:21.668703 kubelet[3434]: I0319 11:34:21.668570 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7pr4\" (UniqueName: \"kubernetes.io/projected/a78d3aab-a1b3-420e-9764-280f1fd1d6f8-kube-api-access-w7pr4\") pod \"kube-proxy-c9fdc\" (UID: \"a78d3aab-a1b3-420e-9764-280f1fd1d6f8\") " pod="kube-system/kube-proxy-c9fdc"
Mar 19 11:34:21.668703 kubelet[3434]: I0319 11:34:21.668622 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-bpf-maps\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.668703 kubelet[3434]: I0319 11:34:21.668658 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-host-proc-sys-net\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.668703 kubelet[3434]: I0319 11:34:21.668699 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-hostproc\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.668942 kubelet[3434]: I0319 11:34:21.668739 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-cgroup\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.668942 kubelet[3434]: I0319 11:34:21.668777 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cni-path\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.668942 kubelet[3434]: I0319 11:34:21.668826 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-host-proc-sys-kernel\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.668942 kubelet[3434]: I0319 11:34:21.668862 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-xtables-lock\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.668942 kubelet[3434]: I0319 11:34:21.668906 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a78d3aab-a1b3-420e-9764-280f1fd1d6f8-lib-modules\") pod \"kube-proxy-c9fdc\" (UID: \"a78d3aab-a1b3-420e-9764-280f1fd1d6f8\") " pod="kube-system/kube-proxy-c9fdc"
Mar 19 11:34:21.668942 kubelet[3434]: I0319 11:34:21.668941 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-hubble-tls\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.669266 kubelet[3434]: I0319 11:34:21.668991 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a78d3aab-a1b3-420e-9764-280f1fd1d6f8-kube-proxy\") pod \"kube-proxy-c9fdc\" (UID: \"a78d3aab-a1b3-420e-9764-280f1fd1d6f8\") " pod="kube-system/kube-proxy-c9fdc"
Mar 19 11:34:21.669266 kubelet[3434]: I0319 11:34:21.669029 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-etc-cni-netd\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.669266 kubelet[3434]: I0319 11:34:21.669070 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-clustermesh-secrets\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.669266 kubelet[3434]: I0319 11:34:21.669106 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc4wt\" (UniqueName: \"kubernetes.io/projected/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-kube-api-access-hc4wt\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.669266 kubelet[3434]: I0319 11:34:21.669147 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-lib-modules\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.669497 kubelet[3434]: I0319 11:34:21.669207 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-config-path\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.669497 kubelet[3434]: I0319 11:34:21.669253 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-run\") pod \"cilium-c9gk7\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") " pod="kube-system/cilium-c9gk7"
Mar 19 11:34:21.708137 systemd[1]: Created slice kubepods-besteffort-pod8c1448f8_47d8_4eee_9db3_5114bbc55e18.slice - libcontainer container kubepods-besteffort-pod8c1448f8_47d8_4eee_9db3_5114bbc55e18.slice.
Mar 19 11:34:21.770470 kubelet[3434]: I0319 11:34:21.770399 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c1448f8-47d8-4eee-9db3-5114bbc55e18-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-q24wq\" (UID: \"8c1448f8-47d8-4eee-9db3-5114bbc55e18\") " pod="kube-system/cilium-operator-6c4d7847fc-q24wq"
Mar 19 11:34:21.771067 kubelet[3434]: I0319 11:34:21.770471 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzw27\" (UniqueName: \"kubernetes.io/projected/8c1448f8-47d8-4eee-9db3-5114bbc55e18-kube-api-access-fzw27\") pod \"cilium-operator-6c4d7847fc-q24wq\" (UID: \"8c1448f8-47d8-4eee-9db3-5114bbc55e18\") " pod="kube-system/cilium-operator-6c4d7847fc-q24wq"
Mar 19 11:34:21.930972 containerd[1949]: time="2025-03-19T11:34:21.930811105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9fdc,Uid:a78d3aab-a1b3-420e-9764-280f1fd1d6f8,Namespace:kube-system,Attempt:0,}"
Mar 19 11:34:21.943946 containerd[1949]: time="2025-03-19T11:34:21.943582237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c9gk7,Uid:8f6c5844-fc22-4cc2-8edb-648a4fa4d836,Namespace:kube-system,Attempt:0,}"
Mar 19 11:34:21.988396 containerd[1949]: time="2025-03-19T11:34:21.988199185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:34:21.988396 containerd[1949]: time="2025-03-19T11:34:21.988309981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:34:21.988706 containerd[1949]: time="2025-03-19T11:34:21.988366585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:34:21.990983 containerd[1949]: time="2025-03-19T11:34:21.990766465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:34:22.003252 containerd[1949]: time="2025-03-19T11:34:22.000869445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:34:22.003252 containerd[1949]: time="2025-03-19T11:34:22.001703529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:34:22.003252 containerd[1949]: time="2025-03-19T11:34:22.001892697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:34:22.003252 containerd[1949]: time="2025-03-19T11:34:22.002252169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:34:22.019302 containerd[1949]: time="2025-03-19T11:34:22.018843813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-q24wq,Uid:8c1448f8-47d8-4eee-9db3-5114bbc55e18,Namespace:kube-system,Attempt:0,}"
Mar 19 11:34:22.027872 systemd[1]: Started cri-containerd-7d887465f5775b69b111246227a1ffe72f6bd6bdaf20b218611a6cbb5ffcb265.scope - libcontainer container 7d887465f5775b69b111246227a1ffe72f6bd6bdaf20b218611a6cbb5ffcb265.
Mar 19 11:34:22.061064 systemd[1]: Started cri-containerd-84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df.scope - libcontainer container 84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df.
Mar 19 11:34:22.141868 containerd[1949]: time="2025-03-19T11:34:22.141682078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c9fdc,Uid:a78d3aab-a1b3-420e-9764-280f1fd1d6f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d887465f5775b69b111246227a1ffe72f6bd6bdaf20b218611a6cbb5ffcb265\""
Mar 19 11:34:22.154560 containerd[1949]: time="2025-03-19T11:34:22.154460842Z" level=info msg="CreateContainer within sandbox \"7d887465f5775b69b111246227a1ffe72f6bd6bdaf20b218611a6cbb5ffcb265\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 19 11:34:22.158726 containerd[1949]: time="2025-03-19T11:34:22.156162070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:34:22.158726 containerd[1949]: time="2025-03-19T11:34:22.156292030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:34:22.158726 containerd[1949]: time="2025-03-19T11:34:22.156380230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:34:22.158726 containerd[1949]: time="2025-03-19T11:34:22.156561898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:34:22.176837 containerd[1949]: time="2025-03-19T11:34:22.176601250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c9gk7,Uid:8f6c5844-fc22-4cc2-8edb-648a4fa4d836,Namespace:kube-system,Attempt:0,} returns sandbox id \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\""
Mar 19 11:34:22.187046 containerd[1949]: time="2025-03-19T11:34:22.186684670Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 19 11:34:22.221771 containerd[1949]: time="2025-03-19T11:34:22.221702290Z" level=info msg="CreateContainer within sandbox \"7d887465f5775b69b111246227a1ffe72f6bd6bdaf20b218611a6cbb5ffcb265\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"94fa5dd7648cc1f47ed1604a3ffbcd74c95196740ea6727a51e477d04014929b\""
Mar 19 11:34:22.226256 containerd[1949]: time="2025-03-19T11:34:22.222949930Z" level=info msg="StartContainer for \"94fa5dd7648cc1f47ed1604a3ffbcd74c95196740ea6727a51e477d04014929b\""
Mar 19 11:34:22.259685 systemd[1]: Started cri-containerd-4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f.scope - libcontainer container 4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f.
Mar 19 11:34:22.334494 systemd[1]: Started cri-containerd-94fa5dd7648cc1f47ed1604a3ffbcd74c95196740ea6727a51e477d04014929b.scope - libcontainer container 94fa5dd7648cc1f47ed1604a3ffbcd74c95196740ea6727a51e477d04014929b.
Mar 19 11:34:22.398633 containerd[1949]: time="2025-03-19T11:34:22.398580755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-q24wq,Uid:8c1448f8-47d8-4eee-9db3-5114bbc55e18,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\""
Mar 19 11:34:22.424907 containerd[1949]: time="2025-03-19T11:34:22.424836563Z" level=info msg="StartContainer for \"94fa5dd7648cc1f47ed1604a3ffbcd74c95196740ea6727a51e477d04014929b\" returns successfully"
Mar 19 11:34:23.303381 kubelet[3434]: I0319 11:34:23.303246 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c9fdc" podStartSLOduration=2.303219948 podStartE2EDuration="2.303219948s" podCreationTimestamp="2025-03-19 11:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:23.301104024 +0000 UTC m=+6.380857077" watchObservedRunningTime="2025-03-19 11:34:23.303219948 +0000 UTC m=+6.382972989"
Mar 19 11:34:32.032914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount729431176.mount: Deactivated successfully.
Mar 19 11:34:34.520477 containerd[1949]: time="2025-03-19T11:34:34.520219655Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:34:34.521956 containerd[1949]: time="2025-03-19T11:34:34.521892623Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Mar 19 11:34:34.522758 containerd[1949]: time="2025-03-19T11:34:34.522676895Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 19 11:34:34.526217 containerd[1949]: time="2025-03-19T11:34:34.526005719Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.339261685s"
Mar 19 11:34:34.526217 containerd[1949]: time="2025-03-19T11:34:34.526062515Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 19 11:34:34.527919 containerd[1949]: time="2025-03-19T11:34:34.527853587Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 19 11:34:34.531338 containerd[1949]: time="2025-03-19T11:34:34.530991827Z" level=info msg="CreateContainer within sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 19 11:34:34.556365 containerd[1949]: time="2025-03-19T11:34:34.556279931Z" level=info msg="CreateContainer within sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4\""
Mar 19 11:34:34.557453 containerd[1949]: time="2025-03-19T11:34:34.557298515Z" level=info msg="StartContainer for \"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4\""
Mar 19 11:34:34.614497 systemd[1]: Started cri-containerd-d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4.scope - libcontainer container d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4.
Mar 19 11:34:34.659826 containerd[1949]: time="2025-03-19T11:34:34.659704512Z" level=info msg="StartContainer for \"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4\" returns successfully"
Mar 19 11:34:34.682675 systemd[1]: cri-containerd-d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4.scope: Deactivated successfully.
Mar 19 11:34:35.545707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4-rootfs.mount: Deactivated successfully.
Mar 19 11:34:35.773795 containerd[1949]: time="2025-03-19T11:34:35.773706062Z" level=info msg="shim disconnected" id=d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4 namespace=k8s.io
Mar 19 11:34:35.773795 containerd[1949]: time="2025-03-19T11:34:35.773782646Z" level=warning msg="cleaning up after shim disconnected" id=d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4 namespace=k8s.io
Mar 19 11:34:35.774441 containerd[1949]: time="2025-03-19T11:34:35.773804618Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:34:36.338757 containerd[1949]: time="2025-03-19T11:34:36.338686536Z" level=info msg="CreateContainer within sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 19 11:34:36.368560 containerd[1949]: time="2025-03-19T11:34:36.368484276Z" level=info msg="CreateContainer within sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a\""
Mar 19 11:34:36.372036 containerd[1949]: time="2025-03-19T11:34:36.371915917Z" level=info msg="StartContainer for \"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a\""
Mar 19 11:34:36.430511 systemd[1]: Started cri-containerd-ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a.scope - libcontainer container ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a.
Mar 19 11:34:36.479547 containerd[1949]: time="2025-03-19T11:34:36.479437333Z" level=info msg="StartContainer for \"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a\" returns successfully"
Mar 19 11:34:36.502065 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 19 11:34:36.502980 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 19 11:34:36.503562 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:34:36.513581 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:34:36.521006 systemd[1]: cri-containerd-ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a.scope: Deactivated successfully. Mar 19 11:34:36.563458 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:34:36.582619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a-rootfs.mount: Deactivated successfully. Mar 19 11:34:36.590669 containerd[1949]: time="2025-03-19T11:34:36.589628990Z" level=info msg="shim disconnected" id=ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a namespace=k8s.io Mar 19 11:34:36.590669 containerd[1949]: time="2025-03-19T11:34:36.589730798Z" level=warning msg="cleaning up after shim disconnected" id=ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a namespace=k8s.io Mar 19 11:34:36.590669 containerd[1949]: time="2025-03-19T11:34:36.589753358Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:34:37.346203 containerd[1949]: time="2025-03-19T11:34:37.344487061Z" level=info msg="CreateContainer within sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 19 11:34:37.388230 containerd[1949]: time="2025-03-19T11:34:37.387533618Z" level=info msg="CreateContainer within sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2\"" Mar 19 11:34:37.391915 containerd[1949]: time="2025-03-19T11:34:37.391025570Z" level=info msg="StartContainer for \"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2\"" Mar 19 11:34:37.449491 systemd[1]: 
Started cri-containerd-f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2.scope - libcontainer container f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2. Mar 19 11:34:37.519831 containerd[1949]: time="2025-03-19T11:34:37.519755690Z" level=info msg="StartContainer for \"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2\" returns successfully" Mar 19 11:34:37.533640 systemd[1]: cri-containerd-f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2.scope: Deactivated successfully. Mar 19 11:34:37.583647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2-rootfs.mount: Deactivated successfully. Mar 19 11:34:37.586787 containerd[1949]: time="2025-03-19T11:34:37.586672119Z" level=info msg="shim disconnected" id=f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2 namespace=k8s.io Mar 19 11:34:37.586962 containerd[1949]: time="2025-03-19T11:34:37.586811823Z" level=warning msg="cleaning up after shim disconnected" id=f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2 namespace=k8s.io Mar 19 11:34:37.586962 containerd[1949]: time="2025-03-19T11:34:37.586837047Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:34:38.079439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785520008.mount: Deactivated successfully. 
Mar 19 11:34:38.354194 containerd[1949]: time="2025-03-19T11:34:38.353192822Z" level=info msg="CreateContainer within sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 19 11:34:38.382731 containerd[1949]: time="2025-03-19T11:34:38.381595118Z" level=info msg="CreateContainer within sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660\"" Mar 19 11:34:38.384196 containerd[1949]: time="2025-03-19T11:34:38.384079130Z" level=info msg="StartContainer for \"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660\"" Mar 19 11:34:38.438496 systemd[1]: Started cri-containerd-cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660.scope - libcontainer container cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660. Mar 19 11:34:38.492892 systemd[1]: cri-containerd-cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660.scope: Deactivated successfully. 
Mar 19 11:34:38.496569 containerd[1949]: time="2025-03-19T11:34:38.494844051Z" level=info msg="StartContainer for \"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660\" returns successfully" Mar 19 11:34:38.556274 containerd[1949]: time="2025-03-19T11:34:38.556152303Z" level=info msg="shim disconnected" id=cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660 namespace=k8s.io Mar 19 11:34:38.556274 containerd[1949]: time="2025-03-19T11:34:38.556271595Z" level=warning msg="cleaning up after shim disconnected" id=cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660 namespace=k8s.io Mar 19 11:34:38.556274 containerd[1949]: time="2025-03-19T11:34:38.556319067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:34:39.360609 containerd[1949]: time="2025-03-19T11:34:39.360505227Z" level=info msg="CreateContainer within sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 19 11:34:39.404852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2258850511.mount: Deactivated successfully. Mar 19 11:34:39.409002 containerd[1949]: time="2025-03-19T11:34:39.408846172Z" level=info msg="CreateContainer within sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\"" Mar 19 11:34:39.414244 containerd[1949]: time="2025-03-19T11:34:39.413145556Z" level=info msg="StartContainer for \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\"" Mar 19 11:34:39.476529 systemd[1]: Started cri-containerd-1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472.scope - libcontainer container 1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472. 
Mar 19 11:34:39.571392 containerd[1949]: time="2025-03-19T11:34:39.570915484Z" level=info msg="StartContainer for \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\" returns successfully" Mar 19 11:34:39.865553 kubelet[3434]: I0319 11:34:39.864622 3434 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 19 11:34:39.955879 systemd[1]: Created slice kubepods-burstable-pod610289e2_59cb_4edc_b724_65d69ebc84a4.slice - libcontainer container kubepods-burstable-pod610289e2_59cb_4edc_b724_65d69ebc84a4.slice. Mar 19 11:34:39.998094 systemd[1]: Created slice kubepods-burstable-pod171f357b_0e5b_4e68_af9d_3c235bc55648.slice - libcontainer container kubepods-burstable-pod171f357b_0e5b_4e68_af9d_3c235bc55648.slice. Mar 19 11:34:40.005376 kubelet[3434]: I0319 11:34:40.002337 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/171f357b-0e5b-4e68-af9d-3c235bc55648-config-volume\") pod \"coredns-668d6bf9bc-v659z\" (UID: \"171f357b-0e5b-4e68-af9d-3c235bc55648\") " pod="kube-system/coredns-668d6bf9bc-v659z" Mar 19 11:34:40.005376 kubelet[3434]: I0319 11:34:40.002729 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj7d7\" (UniqueName: \"kubernetes.io/projected/171f357b-0e5b-4e68-af9d-3c235bc55648-kube-api-access-bj7d7\") pod \"coredns-668d6bf9bc-v659z\" (UID: \"171f357b-0e5b-4e68-af9d-3c235bc55648\") " pod="kube-system/coredns-668d6bf9bc-v659z" Mar 19 11:34:40.005376 kubelet[3434]: I0319 11:34:40.002829 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48tlj\" (UniqueName: \"kubernetes.io/projected/610289e2-59cb-4edc-b724-65d69ebc84a4-kube-api-access-48tlj\") pod \"coredns-668d6bf9bc-clxlv\" (UID: \"610289e2-59cb-4edc-b724-65d69ebc84a4\") " pod="kube-system/coredns-668d6bf9bc-clxlv" Mar 19 
11:34:40.005376 kubelet[3434]: I0319 11:34:40.003092 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/610289e2-59cb-4edc-b724-65d69ebc84a4-config-volume\") pod \"coredns-668d6bf9bc-clxlv\" (UID: \"610289e2-59cb-4edc-b724-65d69ebc84a4\") " pod="kube-system/coredns-668d6bf9bc-clxlv" Mar 19 11:34:40.267439 containerd[1949]: time="2025-03-19T11:34:40.266961628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-clxlv,Uid:610289e2-59cb-4edc-b724-65d69ebc84a4,Namespace:kube-system,Attempt:0,}" Mar 19 11:34:40.314684 containerd[1949]: time="2025-03-19T11:34:40.314264140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v659z,Uid:171f357b-0e5b-4e68-af9d-3c235bc55648,Namespace:kube-system,Attempt:0,}" Mar 19 11:34:40.461346 kubelet[3434]: I0319 11:34:40.460756 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c9gk7" podStartSLOduration=7.117086812 podStartE2EDuration="19.460725389s" podCreationTimestamp="2025-03-19 11:34:21 +0000 UTC" firstStartedPulling="2025-03-19 11:34:22.183916702 +0000 UTC m=+5.263669743" lastFinishedPulling="2025-03-19 11:34:34.527555291 +0000 UTC m=+17.607308320" observedRunningTime="2025-03-19 11:34:40.458336609 +0000 UTC m=+23.538089686" watchObservedRunningTime="2025-03-19 11:34:40.460725389 +0000 UTC m=+23.540478442" Mar 19 11:34:40.832078 containerd[1949]: time="2025-03-19T11:34:40.832009027Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:34:40.835525 containerd[1949]: time="2025-03-19T11:34:40.835426687Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes 
read=17135306" Mar 19 11:34:40.837892 containerd[1949]: time="2025-03-19T11:34:40.837828583Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:34:40.845934 containerd[1949]: time="2025-03-19T11:34:40.845851051Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.317896304s" Mar 19 11:34:40.846099 containerd[1949]: time="2025-03-19T11:34:40.845929147Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 19 11:34:40.852907 containerd[1949]: time="2025-03-19T11:34:40.852045895Z" level=info msg="CreateContainer within sandbox \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 19 11:34:40.884693 containerd[1949]: time="2025-03-19T11:34:40.884497783Z" level=info msg="CreateContainer within sandbox \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\"" Mar 19 11:34:40.889289 containerd[1949]: time="2025-03-19T11:34:40.889238323Z" level=info msg="StartContainer for \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\"" Mar 19 11:34:40.976702 systemd[1]: Started cri-containerd-c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670.scope - 
libcontainer container c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670. Mar 19 11:34:41.060785 containerd[1949]: time="2025-03-19T11:34:41.060600520Z" level=info msg="StartContainer for \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\" returns successfully" Mar 19 11:34:44.987766 systemd-networkd[1866]: cilium_host: Link UP Mar 19 11:34:44.988091 systemd-networkd[1866]: cilium_net: Link UP Mar 19 11:34:44.988512 systemd-networkd[1866]: cilium_net: Gained carrier Mar 19 11:34:44.991625 systemd-networkd[1866]: cilium_host: Gained carrier Mar 19 11:34:44.999981 (udev-worker)[4260]: Network interface NamePolicy= disabled on kernel command line. Mar 19 11:34:45.000886 (udev-worker)[4261]: Network interface NamePolicy= disabled on kernel command line. Mar 19 11:34:45.066663 systemd-networkd[1866]: cilium_net: Gained IPv6LL Mar 19 11:34:45.182118 systemd-networkd[1866]: cilium_vxlan: Link UP Mar 19 11:34:45.182141 systemd-networkd[1866]: cilium_vxlan: Gained carrier Mar 19 11:34:45.676212 kernel: NET: Registered PF_ALG protocol family Mar 19 11:34:45.833745 systemd-networkd[1866]: cilium_host: Gained IPv6LL Mar 19 11:34:46.666914 systemd-networkd[1866]: cilium_vxlan: Gained IPv6LL Mar 19 11:34:46.998653 systemd-networkd[1866]: lxc_health: Link UP Mar 19 11:34:47.003761 (udev-worker)[4272]: Network interface NamePolicy= disabled on kernel command line. 
Mar 19 11:34:47.006250 systemd-networkd[1866]: lxc_health: Gained carrier Mar 19 11:34:47.425336 kernel: eth0: renamed from tmp6834e Mar 19 11:34:47.430488 systemd-networkd[1866]: lxce1496f5eb20d: Link UP Mar 19 11:34:47.439081 systemd-networkd[1866]: lxce1496f5eb20d: Gained carrier Mar 19 11:34:47.484337 kernel: eth0: renamed from tmp93f3a Mar 19 11:34:47.492588 systemd-networkd[1866]: lxc335972694806: Link UP Mar 19 11:34:47.500578 systemd-networkd[1866]: lxc335972694806: Gained carrier Mar 19 11:34:47.983054 kubelet[3434]: I0319 11:34:47.982838 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-q24wq" podStartSLOduration=8.54294579 podStartE2EDuration="26.98278649s" podCreationTimestamp="2025-03-19 11:34:21 +0000 UTC" firstStartedPulling="2025-03-19 11:34:22.407553935 +0000 UTC m=+5.487306976" lastFinishedPulling="2025-03-19 11:34:40.847394659 +0000 UTC m=+23.927147676" observedRunningTime="2025-03-19 11:34:41.482432886 +0000 UTC m=+24.562185939" watchObservedRunningTime="2025-03-19 11:34:47.98278649 +0000 UTC m=+31.062539531" Mar 19 11:34:48.649461 systemd-networkd[1866]: lxc335972694806: Gained IPv6LL Mar 19 11:34:48.841990 systemd-networkd[1866]: lxc_health: Gained IPv6LL Mar 19 11:34:49.353965 systemd-networkd[1866]: lxce1496f5eb20d: Gained IPv6LL Mar 19 11:34:52.329984 ntpd[1930]: Listen normally on 7 cilium_host 192.168.0.147:123 Mar 19 11:34:52.330122 ntpd[1930]: Listen normally on 8 cilium_net [fe80::b02a:1bff:fe1d:7d49%4]:123 Mar 19 11:34:52.330248 ntpd[1930]: Listen normally on 9 cilium_host [fe80::fc15:bff:fe67:e57%5]:123 Mar 19 11:34:52.330318 ntpd[1930]: Listen normally on 10 cilium_vxlan 
[fe80::2c35:b9ff:fe7c:5c1b%6]:123 Mar 19 11:34:52.330387 ntpd[1930]: Listen normally on 11 lxc_health [fe80::b9:25ff:fe35:58fb%8]:123 Mar 19 11:34:52.330457 ntpd[1930]: Listen normally on 12 lxce1496f5eb20d [fe80::8c7a:ebff:fe2e:ee2%10]:123 Mar 19 11:34:52.330525 ntpd[1930]: Listen normally on 13 lxc335972694806 [fe80::2071:feff:feda:29de%12]:123 Mar 19 11:34:55.835282 containerd[1949]: time="2025-03-19T11:34:55.834511065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:34:55.835282 containerd[1949]: time="2025-03-19T11:34:55.834652701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:34:55.835282 containerd[1949]: time="2025-03-19T11:34:55.834690345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:55.836548 containerd[1949]: time="2025-03-19T11:34:55.836093661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:55.892300 systemd[1]: Started cri-containerd-93f3a9e6e3311a4b1be127ee29d4feba0eac7d242e2d156b89aa32c3d409d0d5.scope - libcontainer container 93f3a9e6e3311a4b1be127ee29d4feba0eac7d242e2d156b89aa32c3d409d0d5. Mar 19 11:34:55.966720 containerd[1949]: time="2025-03-19T11:34:55.965129446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:34:55.966720 containerd[1949]: time="2025-03-19T11:34:55.965382946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:34:55.966720 containerd[1949]: time="2025-03-19T11:34:55.965511250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:55.966720 containerd[1949]: time="2025-03-19T11:34:55.965788210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:34:56.055530 systemd[1]: Started cri-containerd-6834e4f4539a4ad0ec6e6387b84b8a29f2b175b775fb9bfdac88d9f615c3739a.scope - libcontainer container 6834e4f4539a4ad0ec6e6387b84b8a29f2b175b775fb9bfdac88d9f615c3739a. 
Mar 19 11:34:56.079405 containerd[1949]: time="2025-03-19T11:34:56.079335738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v659z,Uid:171f357b-0e5b-4e68-af9d-3c235bc55648,Namespace:kube-system,Attempt:0,} returns sandbox id \"93f3a9e6e3311a4b1be127ee29d4feba0eac7d242e2d156b89aa32c3d409d0d5\"" Mar 19 11:34:56.089380 containerd[1949]: time="2025-03-19T11:34:56.088933446Z" level=info msg="CreateContainer within sandbox \"93f3a9e6e3311a4b1be127ee29d4feba0eac7d242e2d156b89aa32c3d409d0d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:34:56.117966 containerd[1949]: time="2025-03-19T11:34:56.117704767Z" level=info msg="CreateContainer within sandbox \"93f3a9e6e3311a4b1be127ee29d4feba0eac7d242e2d156b89aa32c3d409d0d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a96d6776f3b389f7d5a6c863d090c6c41c6dcab500f0e91afcef25f39054213a\"" Mar 19 11:34:56.119970 containerd[1949]: time="2025-03-19T11:34:56.119903395Z" level=info msg="StartContainer for \"a96d6776f3b389f7d5a6c863d090c6c41c6dcab500f0e91afcef25f39054213a\"" Mar 19 11:34:56.190952 containerd[1949]: time="2025-03-19T11:34:56.190886035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-clxlv,Uid:610289e2-59cb-4edc-b724-65d69ebc84a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6834e4f4539a4ad0ec6e6387b84b8a29f2b175b775fb9bfdac88d9f615c3739a\"" Mar 19 11:34:56.206204 containerd[1949]: time="2025-03-19T11:34:56.205929499Z" level=info msg="CreateContainer within sandbox \"6834e4f4539a4ad0ec6e6387b84b8a29f2b175b775fb9bfdac88d9f615c3739a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:34:56.209501 systemd[1]: Started cri-containerd-a96d6776f3b389f7d5a6c863d090c6c41c6dcab500f0e91afcef25f39054213a.scope - libcontainer container a96d6776f3b389f7d5a6c863d090c6c41c6dcab500f0e91afcef25f39054213a. 
Mar 19 11:34:56.238908 containerd[1949]: time="2025-03-19T11:34:56.238829287Z" level=info msg="CreateContainer within sandbox \"6834e4f4539a4ad0ec6e6387b84b8a29f2b175b775fb9bfdac88d9f615c3739a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce1070594bb590faf4a88b60ac11e454500263975a74e219e02a15e9b36b71de\"" Mar 19 11:34:56.239959 containerd[1949]: time="2025-03-19T11:34:56.239902459Z" level=info msg="StartContainer for \"ce1070594bb590faf4a88b60ac11e454500263975a74e219e02a15e9b36b71de\"" Mar 19 11:34:56.328672 systemd[1]: Started cri-containerd-ce1070594bb590faf4a88b60ac11e454500263975a74e219e02a15e9b36b71de.scope - libcontainer container ce1070594bb590faf4a88b60ac11e454500263975a74e219e02a15e9b36b71de. Mar 19 11:34:56.338792 containerd[1949]: time="2025-03-19T11:34:56.338123672Z" level=info msg="StartContainer for \"a96d6776f3b389f7d5a6c863d090c6c41c6dcab500f0e91afcef25f39054213a\" returns successfully" Mar 19 11:34:56.423421 containerd[1949]: time="2025-03-19T11:34:56.422994968Z" level=info msg="StartContainer for \"ce1070594bb590faf4a88b60ac11e454500263975a74e219e02a15e9b36b71de\" returns successfully" Mar 19 11:34:56.503398 kubelet[3434]: I0319 11:34:56.502023 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v659z" podStartSLOduration=35.502002356 podStartE2EDuration="35.502002356s" podCreationTimestamp="2025-03-19 11:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:56.496963796 +0000 UTC m=+39.576716933" watchObservedRunningTime="2025-03-19 11:34:56.502002356 +0000 UTC m=+39.581755397" Mar 19 11:34:57.483611 kubelet[3434]: I0319 11:34:57.483339 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-clxlv" podStartSLOduration=36.483290421 podStartE2EDuration="36.483290421s" podCreationTimestamp="2025-03-19 11:34:21 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:34:56.558856317 +0000 UTC m=+39.638609358" watchObservedRunningTime="2025-03-19 11:34:57.483290421 +0000 UTC m=+40.563043450" Mar 19 11:35:01.859952 systemd[1]: Started sshd@9-172.31.16.168:22-139.178.68.195:59850.service - OpenSSH per-connection server daemon (139.178.68.195:59850). Mar 19 11:35:02.038797 sshd[4797]: Accepted publickey for core from 139.178.68.195 port 59850 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:35:02.041476 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:35:02.049269 systemd-logind[1936]: New session 10 of user core. Mar 19 11:35:02.058698 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 19 11:35:02.325314 sshd[4799]: Connection closed by 139.178.68.195 port 59850 Mar 19 11:35:02.326162 sshd-session[4797]: pam_unix(sshd:session): session closed for user core Mar 19 11:35:02.331572 systemd-logind[1936]: Session 10 logged out. Waiting for processes to exit. Mar 19 11:35:02.333132 systemd[1]: sshd@9-172.31.16.168:22-139.178.68.195:59850.service: Deactivated successfully. Mar 19 11:35:02.339372 systemd[1]: session-10.scope: Deactivated successfully. Mar 19 11:35:02.343472 systemd-logind[1936]: Removed session 10. Mar 19 11:35:07.370939 systemd[1]: Started sshd@10-172.31.16.168:22-139.178.68.195:44010.service - OpenSSH per-connection server daemon (139.178.68.195:44010). Mar 19 11:35:07.568805 sshd[4812]: Accepted publickey for core from 139.178.68.195 port 44010 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:35:07.571306 sshd-session[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:35:07.580305 systemd-logind[1936]: New session 11 of user core. 
Mar 19 11:35:07.590523 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 19 11:35:07.852913 sshd[4815]: Connection closed by 139.178.68.195 port 44010 Mar 19 11:35:07.853804 sshd-session[4812]: pam_unix(sshd:session): session closed for user core Mar 19 11:35:07.861279 systemd[1]: sshd@10-172.31.16.168:22-139.178.68.195:44010.service: Deactivated successfully. Mar 19 11:35:07.864935 systemd[1]: session-11.scope: Deactivated successfully. Mar 19 11:35:07.867533 systemd-logind[1936]: Session 11 logged out. Waiting for processes to exit. Mar 19 11:35:07.869893 systemd-logind[1936]: Removed session 11. Mar 19 11:35:12.892745 systemd[1]: Started sshd@11-172.31.16.168:22-139.178.68.195:44026.service - OpenSSH per-connection server daemon (139.178.68.195:44026). Mar 19 11:35:13.080127 sshd[4828]: Accepted publickey for core from 139.178.68.195 port 44026 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:35:13.082715 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:35:13.092004 systemd-logind[1936]: New session 12 of user core. Mar 19 11:35:13.098496 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 19 11:35:13.343437 sshd[4830]: Connection closed by 139.178.68.195 port 44026 Mar 19 11:35:13.344380 sshd-session[4828]: pam_unix(sshd:session): session closed for user core Mar 19 11:35:13.352377 systemd[1]: sshd@11-172.31.16.168:22-139.178.68.195:44026.service: Deactivated successfully. Mar 19 11:35:13.361682 systemd[1]: session-12.scope: Deactivated successfully. Mar 19 11:35:13.368885 systemd-logind[1936]: Session 12 logged out. Waiting for processes to exit. Mar 19 11:35:13.373223 systemd-logind[1936]: Removed session 12. Mar 19 11:35:18.386721 systemd[1]: Started sshd@12-172.31.16.168:22-139.178.68.195:50026.service - OpenSSH per-connection server daemon (139.178.68.195:50026). 
Mar 19 11:35:18.582495 sshd[4845]: Accepted publickey for core from 139.178.68.195 port 50026 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:35:18.584949 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:35:18.594765 systemd-logind[1936]: New session 13 of user core. Mar 19 11:35:18.601470 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 19 11:35:18.841426 sshd[4847]: Connection closed by 139.178.68.195 port 50026 Mar 19 11:35:18.842569 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Mar 19 11:35:18.847651 systemd[1]: sshd@12-172.31.16.168:22-139.178.68.195:50026.service: Deactivated successfully. Mar 19 11:35:18.850840 systemd[1]: session-13.scope: Deactivated successfully. Mar 19 11:35:18.855014 systemd-logind[1936]: Session 13 logged out. Waiting for processes to exit. Mar 19 11:35:18.857536 systemd-logind[1936]: Removed session 13. Mar 19 11:35:18.884704 systemd[1]: Started sshd@13-172.31.16.168:22-139.178.68.195:50036.service - OpenSSH per-connection server daemon (139.178.68.195:50036). Mar 19 11:35:19.066965 sshd[4859]: Accepted publickey for core from 139.178.68.195 port 50036 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:35:19.069596 sshd-session[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:35:19.078825 systemd-logind[1936]: New session 14 of user core. Mar 19 11:35:19.083477 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 19 11:35:19.411275 sshd[4861]: Connection closed by 139.178.68.195 port 50036 Mar 19 11:35:19.411824 sshd-session[4859]: pam_unix(sshd:session): session closed for user core Mar 19 11:35:19.425923 systemd[1]: sshd@13-172.31.16.168:22-139.178.68.195:50036.service: Deactivated successfully. Mar 19 11:35:19.436933 systemd[1]: session-14.scope: Deactivated successfully. 
Mar 19 11:35:19.441069 systemd-logind[1936]: Session 14 logged out. Waiting for processes to exit. Mar 19 11:35:19.462792 systemd[1]: Started sshd@14-172.31.16.168:22-139.178.68.195:50050.service - OpenSSH per-connection server daemon (139.178.68.195:50050). Mar 19 11:35:19.466307 systemd-logind[1936]: Removed session 14. Mar 19 11:35:19.660017 sshd[4870]: Accepted publickey for core from 139.178.68.195 port 50050 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:35:19.663010 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:35:19.672785 systemd-logind[1936]: New session 15 of user core. Mar 19 11:35:19.678472 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 19 11:35:19.935627 sshd[4873]: Connection closed by 139.178.68.195 port 50050 Mar 19 11:35:19.936670 sshd-session[4870]: pam_unix(sshd:session): session closed for user core Mar 19 11:35:19.943744 systemd[1]: sshd@14-172.31.16.168:22-139.178.68.195:50050.service: Deactivated successfully. Mar 19 11:35:19.948341 systemd[1]: session-15.scope: Deactivated successfully. Mar 19 11:35:19.953056 systemd-logind[1936]: Session 15 logged out. Waiting for processes to exit. Mar 19 11:35:19.955857 systemd-logind[1936]: Removed session 15. 
Mar 19 11:35:23.467666 update_engine[1937]: I20250319 11:35:23.467581 1937 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 19 11:35:23.467666 update_engine[1937]: I20250319 11:35:23.467659 1937 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 19 11:35:23.468377 update_engine[1937]: I20250319 11:35:23.467933 1937 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 19 11:35:23.468878 update_engine[1937]: I20250319 11:35:23.468817 1937 omaha_request_params.cc:62] Current group set to beta Mar 19 11:35:23.469025 update_engine[1937]: I20250319 11:35:23.468976 1937 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 19 11:35:23.469025 update_engine[1937]: I20250319 11:35:23.469010 1937 update_attempter.cc:643] Scheduling an action processor start. Mar 19 11:35:23.469131 update_engine[1937]: I20250319 11:35:23.469046 1937 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 19 11:35:23.469131 update_engine[1937]: I20250319 11:35:23.469108 1937 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 19 11:35:23.469723 update_engine[1937]: I20250319 11:35:23.469661 1937 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 19 11:35:23.469723 update_engine[1937]: I20250319 11:35:23.469707 1937 omaha_request_action.cc:272] Request: Mar 19 11:35:23.469723 update_engine[1937]: Mar 19 11:35:23.469723 update_engine[1937]: Mar 19 11:35:23.469723 update_engine[1937]: Mar 19 11:35:23.469723 update_engine[1937]: Mar 19 11:35:23.469723 update_engine[1937]: Mar 19 11:35:23.469723 update_engine[1937]: Mar 19 11:35:23.469723 update_engine[1937]: Mar 19 11:35:23.469723 update_engine[1937]: Mar 19 11:35:23.470326 update_engine[1937]: I20250319 11:35:23.469729 1937 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 19 11:35:23.470688 locksmithd[1971]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 19 11:35:23.471841 update_engine[1937]: I20250319 11:35:23.471771 1937 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 19 11:35:23.472420 update_engine[1937]: I20250319 11:35:23.472361 1937 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 19 11:35:23.495924 update_engine[1937]: E20250319 11:35:23.495842 1937 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 19 11:35:23.496080 update_engine[1937]: I20250319 11:35:23.495997 1937 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 19 11:35:24.977704 systemd[1]: Started sshd@15-172.31.16.168:22-139.178.68.195:50052.service - OpenSSH per-connection server daemon (139.178.68.195:50052). Mar 19 11:35:25.161975 sshd[4888]: Accepted publickey for core from 139.178.68.195 port 50052 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:35:25.165078 sshd-session[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:35:25.173965 systemd-logind[1936]: New session 16 of user core. Mar 19 11:35:25.185447 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 19 11:35:25.427277 sshd[4890]: Connection closed by 139.178.68.195 port 50052 Mar 19 11:35:25.428099 sshd-session[4888]: pam_unix(sshd:session): session closed for user core Mar 19 11:35:25.433646 systemd[1]: sshd@15-172.31.16.168:22-139.178.68.195:50052.service: Deactivated successfully. Mar 19 11:35:25.439607 systemd[1]: session-16.scope: Deactivated successfully. Mar 19 11:35:25.443431 systemd-logind[1936]: Session 16 logged out. Waiting for processes to exit. Mar 19 11:35:25.445724 systemd-logind[1936]: Removed session 16. Mar 19 11:35:30.468688 systemd[1]: Started sshd@16-172.31.16.168:22-139.178.68.195:48244.service - OpenSSH per-connection server daemon (139.178.68.195:48244). 
Mar 19 11:35:30.651931 sshd[4904]: Accepted publickey for core from 139.178.68.195 port 48244 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:30.654392 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:30.662943 systemd-logind[1936]: New session 17 of user core.
Mar 19 11:35:30.675455 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 19 11:35:30.916320 sshd[4906]: Connection closed by 139.178.68.195 port 48244
Mar 19 11:35:30.917225 sshd-session[4904]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:30.924477 systemd[1]: sshd@16-172.31.16.168:22-139.178.68.195:48244.service: Deactivated successfully.
Mar 19 11:35:30.927887 systemd[1]: session-17.scope: Deactivated successfully.
Mar 19 11:35:30.929888 systemd-logind[1936]: Session 17 logged out. Waiting for processes to exit.
Mar 19 11:35:30.932692 systemd-logind[1936]: Removed session 17.
Mar 19 11:35:33.467247 update_engine[1937]: I20250319 11:35:33.466626 1937 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:35:33.467247 update_engine[1937]: I20250319 11:35:33.466982 1937 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:35:33.467827 update_engine[1937]: I20250319 11:35:33.467389 1937 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:35:33.467916 update_engine[1937]: E20250319 11:35:33.467869 1937 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:35:33.467981 update_engine[1937]: I20250319 11:35:33.467960 1937 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 19 11:35:35.960795 systemd[1]: Started sshd@17-172.31.16.168:22-139.178.68.195:44034.service - OpenSSH per-connection server daemon (139.178.68.195:44034).
Mar 19 11:35:36.150966 sshd[4918]: Accepted publickey for core from 139.178.68.195 port 44034 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:36.153461 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:36.163563 systemd-logind[1936]: New session 18 of user core.
Mar 19 11:35:36.171474 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 19 11:35:36.411061 sshd[4920]: Connection closed by 139.178.68.195 port 44034
Mar 19 11:35:36.411962 sshd-session[4918]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:36.418473 systemd[1]: sshd@17-172.31.16.168:22-139.178.68.195:44034.service: Deactivated successfully.
Mar 19 11:35:36.423637 systemd[1]: session-18.scope: Deactivated successfully.
Mar 19 11:35:36.425158 systemd-logind[1936]: Session 18 logged out. Waiting for processes to exit.
Mar 19 11:35:36.427129 systemd-logind[1936]: Removed session 18.
Mar 19 11:35:41.452718 systemd[1]: Started sshd@18-172.31.16.168:22-139.178.68.195:44048.service - OpenSSH per-connection server daemon (139.178.68.195:44048).
Mar 19 11:35:41.649912 sshd[4933]: Accepted publickey for core from 139.178.68.195 port 44048 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:41.652417 sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:41.662608 systemd-logind[1936]: New session 19 of user core.
Mar 19 11:35:41.673472 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 19 11:35:41.911132 sshd[4935]: Connection closed by 139.178.68.195 port 44048
Mar 19 11:35:41.911003 sshd-session[4933]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:41.918391 systemd[1]: sshd@18-172.31.16.168:22-139.178.68.195:44048.service: Deactivated successfully.
Mar 19 11:35:41.922733 systemd[1]: session-19.scope: Deactivated successfully.
Mar 19 11:35:41.924720 systemd-logind[1936]: Session 19 logged out. Waiting for processes to exit.
Mar 19 11:35:41.926682 systemd-logind[1936]: Removed session 19.
Mar 19 11:35:41.957811 systemd[1]: Started sshd@19-172.31.16.168:22-139.178.68.195:44056.service - OpenSSH per-connection server daemon (139.178.68.195:44056).
Mar 19 11:35:42.142619 sshd[4947]: Accepted publickey for core from 139.178.68.195 port 44056 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:42.145124 sshd-session[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:42.154717 systemd-logind[1936]: New session 20 of user core.
Mar 19 11:35:42.163449 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 19 11:35:42.461001 sshd[4949]: Connection closed by 139.178.68.195 port 44056
Mar 19 11:35:42.462485 sshd-session[4947]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:42.469862 systemd[1]: sshd@19-172.31.16.168:22-139.178.68.195:44056.service: Deactivated successfully.
Mar 19 11:35:42.474014 systemd[1]: session-20.scope: Deactivated successfully.
Mar 19 11:35:42.475718 systemd-logind[1936]: Session 20 logged out. Waiting for processes to exit.
Mar 19 11:35:42.478555 systemd-logind[1936]: Removed session 20.
Mar 19 11:35:42.501693 systemd[1]: Started sshd@20-172.31.16.168:22-139.178.68.195:44062.service - OpenSSH per-connection server daemon (139.178.68.195:44062).
Mar 19 11:35:42.687353 sshd[4959]: Accepted publickey for core from 139.178.68.195 port 44062 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:42.689928 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:42.699945 systemd-logind[1936]: New session 21 of user core.
Mar 19 11:35:42.704507 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 19 11:35:43.468486 update_engine[1937]: I20250319 11:35:43.468230 1937 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:35:43.469148 update_engine[1937]: I20250319 11:35:43.468614 1937 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:35:43.469148 update_engine[1937]: I20250319 11:35:43.468938 1937 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:35:43.470230 update_engine[1937]: E20250319 11:35:43.469534 1937 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:35:43.470230 update_engine[1937]: I20250319 11:35:43.469641 1937 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 19 11:35:44.019651 sshd[4961]: Connection closed by 139.178.68.195 port 44062
Mar 19 11:35:44.020364 sshd-session[4959]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:44.032039 systemd[1]: sshd@20-172.31.16.168:22-139.178.68.195:44062.service: Deactivated successfully.
Mar 19 11:35:44.041841 systemd[1]: session-21.scope: Deactivated successfully.
Mar 19 11:35:44.044627 systemd-logind[1936]: Session 21 logged out. Waiting for processes to exit.
Mar 19 11:35:44.070673 systemd[1]: Started sshd@21-172.31.16.168:22-139.178.68.195:44072.service - OpenSSH per-connection server daemon (139.178.68.195:44072).
Mar 19 11:35:44.074481 systemd-logind[1936]: Removed session 21.
Mar 19 11:35:44.271486 sshd[4976]: Accepted publickey for core from 139.178.68.195 port 44072 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:44.274297 sshd-session[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:44.282509 systemd-logind[1936]: New session 22 of user core.
Mar 19 11:35:44.292486 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 19 11:35:44.785681 sshd[4980]: Connection closed by 139.178.68.195 port 44072
Mar 19 11:35:44.787398 sshd-session[4976]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:44.795876 systemd[1]: sshd@21-172.31.16.168:22-139.178.68.195:44072.service: Deactivated successfully.
Mar 19 11:35:44.801688 systemd[1]: session-22.scope: Deactivated successfully.
Mar 19 11:35:44.803190 systemd-logind[1936]: Session 22 logged out. Waiting for processes to exit.
Mar 19 11:35:44.806019 systemd-logind[1936]: Removed session 22.
Mar 19 11:35:44.830735 systemd[1]: Started sshd@22-172.31.16.168:22-139.178.68.195:44082.service - OpenSSH per-connection server daemon (139.178.68.195:44082).
Mar 19 11:35:45.028976 sshd[4990]: Accepted publickey for core from 139.178.68.195 port 44082 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:45.031480 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:45.040544 systemd-logind[1936]: New session 23 of user core.
Mar 19 11:35:45.047420 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 19 11:35:45.290520 sshd[4992]: Connection closed by 139.178.68.195 port 44082
Mar 19 11:35:45.292394 sshd-session[4990]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:45.300442 systemd[1]: sshd@22-172.31.16.168:22-139.178.68.195:44082.service: Deactivated successfully.
Mar 19 11:35:45.306175 systemd[1]: session-23.scope: Deactivated successfully.
Mar 19 11:35:45.307808 systemd-logind[1936]: Session 23 logged out. Waiting for processes to exit.
Mar 19 11:35:45.310119 systemd-logind[1936]: Removed session 23.
Mar 19 11:35:50.330720 systemd[1]: Started sshd@23-172.31.16.168:22-139.178.68.195:44502.service - OpenSSH per-connection server daemon (139.178.68.195:44502).
Mar 19 11:35:50.517562 sshd[5003]: Accepted publickey for core from 139.178.68.195 port 44502 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:50.520557 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:50.529330 systemd-logind[1936]: New session 24 of user core.
Mar 19 11:35:50.536453 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 19 11:35:50.772887 sshd[5005]: Connection closed by 139.178.68.195 port 44502
Mar 19 11:35:50.773800 sshd-session[5003]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:50.780512 systemd[1]: sshd@23-172.31.16.168:22-139.178.68.195:44502.service: Deactivated successfully.
Mar 19 11:35:50.784570 systemd[1]: session-24.scope: Deactivated successfully.
Mar 19 11:35:50.786694 systemd-logind[1936]: Session 24 logged out. Waiting for processes to exit.
Mar 19 11:35:50.788851 systemd-logind[1936]: Removed session 24.
Mar 19 11:35:53.469382 update_engine[1937]: I20250319 11:35:53.469278 1937 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:35:53.469989 update_engine[1937]: I20250319 11:35:53.469646 1937 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:35:53.470053 update_engine[1937]: I20250319 11:35:53.470008 1937 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:35:53.470563 update_engine[1937]: E20250319 11:35:53.470502 1937 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:35:53.470684 update_engine[1937]: I20250319 11:35:53.470592 1937 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 19 11:35:53.470684 update_engine[1937]: I20250319 11:35:53.470614 1937 omaha_request_action.cc:617] Omaha request response:
Mar 19 11:35:53.470790 update_engine[1937]: E20250319 11:35:53.470727 1937 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 19 11:35:53.470790 update_engine[1937]: I20250319 11:35:53.470760 1937 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 19 11:35:53.470790 update_engine[1937]: I20250319 11:35:53.470777 1937 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 19 11:35:53.470932 update_engine[1937]: I20250319 11:35:53.470791 1937 update_attempter.cc:306] Processing Done.
Mar 19 11:35:53.470932 update_engine[1937]: E20250319 11:35:53.470818 1937 update_attempter.cc:619] Update failed.
Mar 19 11:35:53.470932 update_engine[1937]: I20250319 11:35:53.470834 1937 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 19 11:35:53.470932 update_engine[1937]: I20250319 11:35:53.470849 1937 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 19 11:35:53.470932 update_engine[1937]: I20250319 11:35:53.470865 1937 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 19 11:35:53.471187 update_engine[1937]: I20250319 11:35:53.470974 1937 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 19 11:35:53.471187 update_engine[1937]: I20250319 11:35:53.471016 1937 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 19 11:35:53.471187 update_engine[1937]: I20250319 11:35:53.471035 1937 omaha_request_action.cc:272] Request:
Mar 19 11:35:53.471187 update_engine[1937]:
Mar 19 11:35:53.471187 update_engine[1937]:
Mar 19 11:35:53.471187 update_engine[1937]:
Mar 19 11:35:53.471187 update_engine[1937]:
Mar 19 11:35:53.471187 update_engine[1937]:
Mar 19 11:35:53.471187 update_engine[1937]:
Mar 19 11:35:53.471187 update_engine[1937]: I20250319 11:35:53.471051 1937 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 19 11:35:53.471727 update_engine[1937]: I20250319 11:35:53.471351 1937 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 19 11:35:53.471780 update_engine[1937]: I20250319 11:35:53.471729 1937 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 19 11:35:53.472370 update_engine[1937]: E20250319 11:35:53.472064 1937 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 19 11:35:53.472370 update_engine[1937]: I20250319 11:35:53.472194 1937 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 19 11:35:53.472370 update_engine[1937]: I20250319 11:35:53.472218 1937 omaha_request_action.cc:617] Omaha request response:
Mar 19 11:35:53.472370 update_engine[1937]: I20250319 11:35:53.472237 1937 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 19 11:35:53.472370 update_engine[1937]: I20250319 11:35:53.472252 1937 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 19 11:35:53.472370 update_engine[1937]: I20250319 11:35:53.472267 1937 update_attempter.cc:306] Processing Done.
Mar 19 11:35:53.472370 update_engine[1937]: I20250319 11:35:53.472283 1937 update_attempter.cc:310] Error event sent.
Mar 19 11:35:53.472370 update_engine[1937]: I20250319 11:35:53.472304 1937 update_check_scheduler.cc:74] Next update check in 44m18s
Mar 19 11:35:53.472823 locksmithd[1971]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 19 11:35:53.473728 locksmithd[1971]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 19 11:35:55.820683 systemd[1]: Started sshd@24-172.31.16.168:22-139.178.68.195:43276.service - OpenSSH per-connection server daemon (139.178.68.195:43276).
Mar 19 11:35:56.010356 sshd[5021]: Accepted publickey for core from 139.178.68.195 port 43276 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:35:56.012773 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:35:56.022279 systemd-logind[1936]: New session 25 of user core.
Mar 19 11:35:56.030494 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 19 11:35:56.278962 sshd[5023]: Connection closed by 139.178.68.195 port 43276
Mar 19 11:35:56.277797 sshd-session[5021]: pam_unix(sshd:session): session closed for user core
Mar 19 11:35:56.283329 systemd-logind[1936]: Session 25 logged out. Waiting for processes to exit.
Mar 19 11:35:56.284467 systemd[1]: sshd@24-172.31.16.168:22-139.178.68.195:43276.service: Deactivated successfully.
Mar 19 11:35:56.288766 systemd[1]: session-25.scope: Deactivated successfully.
Mar 19 11:35:56.292923 systemd-logind[1936]: Removed session 25.
Mar 19 11:36:01.318744 systemd[1]: Started sshd@25-172.31.16.168:22-139.178.68.195:43282.service - OpenSSH per-connection server daemon (139.178.68.195:43282).
Mar 19 11:36:01.508901 sshd[5034]: Accepted publickey for core from 139.178.68.195 port 43282 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:36:01.511907 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:01.524419 systemd-logind[1936]: New session 26 of user core.
Mar 19 11:36:01.535500 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 19 11:36:01.769900 sshd[5036]: Connection closed by 139.178.68.195 port 43282
Mar 19 11:36:01.768792 sshd-session[5034]: pam_unix(sshd:session): session closed for user core
Mar 19 11:36:01.775126 systemd[1]: sshd@25-172.31.16.168:22-139.178.68.195:43282.service: Deactivated successfully.
Mar 19 11:36:01.780244 systemd[1]: session-26.scope: Deactivated successfully.
Mar 19 11:36:01.782244 systemd-logind[1936]: Session 26 logged out. Waiting for processes to exit.
Mar 19 11:36:01.783980 systemd-logind[1936]: Removed session 26.
Mar 19 11:36:06.809721 systemd[1]: Started sshd@26-172.31.16.168:22-139.178.68.195:55536.service - OpenSSH per-connection server daemon (139.178.68.195:55536).
Mar 19 11:36:07.003833 sshd[5048]: Accepted publickey for core from 139.178.68.195 port 55536 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:36:07.006361 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:07.015495 systemd-logind[1936]: New session 27 of user core.
Mar 19 11:36:07.022852 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 19 11:36:07.269330 sshd[5050]: Connection closed by 139.178.68.195 port 55536
Mar 19 11:36:07.270246 sshd-session[5048]: pam_unix(sshd:session): session closed for user core
Mar 19 11:36:07.276823 systemd[1]: sshd@26-172.31.16.168:22-139.178.68.195:55536.service: Deactivated successfully.
Mar 19 11:36:07.281376 systemd[1]: session-27.scope: Deactivated successfully.
Mar 19 11:36:07.282947 systemd-logind[1936]: Session 27 logged out. Waiting for processes to exit.
Mar 19 11:36:07.285401 systemd-logind[1936]: Removed session 27.
Mar 19 11:36:07.315908 systemd[1]: Started sshd@27-172.31.16.168:22-139.178.68.195:55548.service - OpenSSH per-connection server daemon (139.178.68.195:55548).
Mar 19 11:36:07.502947 sshd[5062]: Accepted publickey for core from 139.178.68.195 port 55548 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:36:07.505700 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:07.518676 systemd-logind[1936]: New session 28 of user core.
Mar 19 11:36:07.525095 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 19 11:36:09.742325 containerd[1949]: time="2025-03-19T11:36:09.742049360Z" level=info msg="StopContainer for \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\" with timeout 30 (s)"
Mar 19 11:36:09.745940 containerd[1949]: time="2025-03-19T11:36:09.745290572Z" level=info msg="Stop container \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\" with signal terminated"
Mar 19 11:36:09.780342 systemd[1]: cri-containerd-c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670.scope: Deactivated successfully.
Mar 19 11:36:09.808470 containerd[1949]: time="2025-03-19T11:36:09.808111521Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 19 11:36:09.823198 containerd[1949]: time="2025-03-19T11:36:09.822526833Z" level=info msg="StopContainer for \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\" with timeout 2 (s)"
Mar 19 11:36:09.823673 containerd[1949]: time="2025-03-19T11:36:09.823633125Z" level=info msg="Stop container \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\" with signal terminated"
Mar 19 11:36:09.841612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670-rootfs.mount: Deactivated successfully.
Mar 19 11:36:09.845828 systemd-networkd[1866]: lxc_health: Link DOWN
Mar 19 11:36:09.845847 systemd-networkd[1866]: lxc_health: Lost carrier
Mar 19 11:36:09.857224 containerd[1949]: time="2025-03-19T11:36:09.856382673Z" level=info msg="shim disconnected" id=c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670 namespace=k8s.io
Mar 19 11:36:09.857224 containerd[1949]: time="2025-03-19T11:36:09.856576341Z" level=warning msg="cleaning up after shim disconnected" id=c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670 namespace=k8s.io
Mar 19 11:36:09.857224 containerd[1949]: time="2025-03-19T11:36:09.856626345Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:09.881981 systemd[1]: cri-containerd-1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472.scope: Deactivated successfully.
Mar 19 11:36:09.882960 systemd[1]: cri-containerd-1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472.scope: Consumed 14.381s CPU time, 127.5M memory peak, 136K read from disk, 12.9M written to disk.
Mar 19 11:36:09.910814 containerd[1949]: time="2025-03-19T11:36:09.910543017Z" level=info msg="StopContainer for \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\" returns successfully"
Mar 19 11:36:09.913885 containerd[1949]: time="2025-03-19T11:36:09.913450881Z" level=info msg="StopPodSandbox for \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\""
Mar 19 11:36:09.913885 containerd[1949]: time="2025-03-19T11:36:09.913526637Z" level=info msg="Container to stop \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:09.920056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f-shm.mount: Deactivated successfully.
Mar 19 11:36:09.939271 systemd[1]: cri-containerd-4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f.scope: Deactivated successfully.
Mar 19 11:36:09.956793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472-rootfs.mount: Deactivated successfully.
Mar 19 11:36:09.961369 containerd[1949]: time="2025-03-19T11:36:09.958879473Z" level=info msg="shim disconnected" id=1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472 namespace=k8s.io
Mar 19 11:36:09.961369 containerd[1949]: time="2025-03-19T11:36:09.961339497Z" level=warning msg="cleaning up after shim disconnected" id=1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472 namespace=k8s.io
Mar 19 11:36:09.961369 containerd[1949]: time="2025-03-19T11:36:09.961369701Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:10.003284 containerd[1949]: time="2025-03-19T11:36:10.002765970Z" level=info msg="StopContainer for \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\" returns successfully"
Mar 19 11:36:10.003627 containerd[1949]: time="2025-03-19T11:36:10.003422922Z" level=info msg="StopPodSandbox for \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\""
Mar 19 11:36:10.003627 containerd[1949]: time="2025-03-19T11:36:10.003496014Z" level=info msg="Container to stop \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:10.003627 containerd[1949]: time="2025-03-19T11:36:10.003522786Z" level=info msg="Container to stop \"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:10.003627 containerd[1949]: time="2025-03-19T11:36:10.003543402Z" level=info msg="Container to stop \"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:10.003627 containerd[1949]: time="2025-03-19T11:36:10.003567150Z" level=info msg="Container to stop \"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:10.003627 containerd[1949]: time="2025-03-19T11:36:10.003588246Z" level=info msg="Container to stop \"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 19 11:36:10.017771 containerd[1949]: time="2025-03-19T11:36:10.017389878Z" level=info msg="shim disconnected" id=4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f namespace=k8s.io
Mar 19 11:36:10.017771 containerd[1949]: time="2025-03-19T11:36:10.017517570Z" level=warning msg="cleaning up after shim disconnected" id=4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f namespace=k8s.io
Mar 19 11:36:10.017771 containerd[1949]: time="2025-03-19T11:36:10.017582898Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:10.024150 systemd[1]: cri-containerd-84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df.scope: Deactivated successfully.
Mar 19 11:36:10.055591 containerd[1949]: time="2025-03-19T11:36:10.055523946Z" level=info msg="TearDown network for sandbox \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\" successfully"
Mar 19 11:36:10.055591 containerd[1949]: time="2025-03-19T11:36:10.055578354Z" level=info msg="StopPodSandbox for \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\" returns successfully"
Mar 19 11:36:10.090560 containerd[1949]: time="2025-03-19T11:36:10.089932914Z" level=info msg="shim disconnected" id=84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df namespace=k8s.io
Mar 19 11:36:10.090560 containerd[1949]: time="2025-03-19T11:36:10.090016818Z" level=warning msg="cleaning up after shim disconnected" id=84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df namespace=k8s.io
Mar 19 11:36:10.090560 containerd[1949]: time="2025-03-19T11:36:10.090039246Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:10.113007 containerd[1949]: time="2025-03-19T11:36:10.112800306Z" level=info msg="TearDown network for sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" successfully"
Mar 19 11:36:10.113007 containerd[1949]: time="2025-03-19T11:36:10.112863822Z" level=info msg="StopPodSandbox for \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" returns successfully"
Mar 19 11:36:10.186546 kubelet[3434]: I0319 11:36:10.185897 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-cgroup\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.186546 kubelet[3434]: I0319 11:36:10.185975 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-clustermesh-secrets\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.186546 kubelet[3434]: I0319 11:36:10.186015 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-lib-modules\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.186546 kubelet[3434]: I0319 11:36:10.186013 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:36:10.186546 kubelet[3434]: I0319 11:36:10.186052 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-bpf-maps\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.186546 kubelet[3434]: I0319 11:36:10.186087 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-etc-cni-netd\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.187438 kubelet[3434]: I0319 11:36:10.186124 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-host-proc-sys-net\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.187438 kubelet[3434]: I0319 11:36:10.186159 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cni-path\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.187438 kubelet[3434]: I0319 11:36:10.186215 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-host-proc-sys-kernel\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.187438 kubelet[3434]: I0319 11:36:10.186256 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-run\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.187438 kubelet[3434]: I0319 11:36:10.186295 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzw27\" (UniqueName: \"kubernetes.io/projected/8c1448f8-47d8-4eee-9db3-5114bbc55e18-kube-api-access-fzw27\") pod \"8c1448f8-47d8-4eee-9db3-5114bbc55e18\" (UID: \"8c1448f8-47d8-4eee-9db3-5114bbc55e18\") "
Mar 19 11:36:10.187438 kubelet[3434]: I0319 11:36:10.186338 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc4wt\" (UniqueName: \"kubernetes.io/projected/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-kube-api-access-hc4wt\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.187750 kubelet[3434]: I0319 11:36:10.186373 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-xtables-lock\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.187750 kubelet[3434]: I0319 11:36:10.186411 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-hubble-tls\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.187750 kubelet[3434]: I0319 11:36:10.186450 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c1448f8-47d8-4eee-9db3-5114bbc55e18-cilium-config-path\") pod \"8c1448f8-47d8-4eee-9db3-5114bbc55e18\" (UID: \"8c1448f8-47d8-4eee-9db3-5114bbc55e18\") "
Mar 19 11:36:10.187750 kubelet[3434]: I0319 11:36:10.186491 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-config-path\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.187750 kubelet[3434]: I0319 11:36:10.186530 3434 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-hostproc\") pod \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\" (UID: \"8f6c5844-fc22-4cc2-8edb-648a4fa4d836\") "
Mar 19 11:36:10.187750 kubelet[3434]: I0319 11:36:10.186672 3434 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-cgroup\") on node \"ip-172-31-16-168\" DevicePath \"\""
Mar 19 11:36:10.188062 kubelet[3434]: I0319 11:36:10.187103 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:36:10.188062 kubelet[3434]: I0319 11:36:10.187149 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:36:10.188062 kubelet[3434]: I0319 11:36:10.187245 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:36:10.188062 kubelet[3434]: I0319 11:36:10.187282 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:36:10.188062 kubelet[3434]: I0319 11:36:10.187318 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cni-path" (OuterVolumeSpecName: "cni-path") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:36:10.189768 kubelet[3434]: I0319 11:36:10.187353 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:36:10.189768 kubelet[3434]: I0319 11:36:10.187390 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:36:10.191738 kubelet[3434]: I0319 11:36:10.191343 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 19 11:36:10.192423 kubelet[3434]: I0319 11:36:10.192132 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-hostproc" (OuterVolumeSpecName: "hostproc") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "hostproc".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 19 11:36:10.199531 kubelet[3434]: I0319 11:36:10.199322 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 19 11:36:10.199531 kubelet[3434]: I0319 11:36:10.199477 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1448f8-47d8-4eee-9db3-5114bbc55e18-kube-api-access-fzw27" (OuterVolumeSpecName: "kube-api-access-fzw27") pod "8c1448f8-47d8-4eee-9db3-5114bbc55e18" (UID: "8c1448f8-47d8-4eee-9db3-5114bbc55e18"). InnerVolumeSpecName "kube-api-access-fzw27". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 19 11:36:10.202324 kubelet[3434]: I0319 11:36:10.202250 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c1448f8-47d8-4eee-9db3-5114bbc55e18-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8c1448f8-47d8-4eee-9db3-5114bbc55e18" (UID: "8c1448f8-47d8-4eee-9db3-5114bbc55e18"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 19 11:36:10.204012 kubelet[3434]: I0319 11:36:10.203919 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 19 11:36:10.206873 kubelet[3434]: I0319 11:36:10.206801 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-kube-api-access-hc4wt" (OuterVolumeSpecName: "kube-api-access-hc4wt") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "kube-api-access-hc4wt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 19 11:36:10.208144 kubelet[3434]: I0319 11:36:10.208084 3434 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8f6c5844-fc22-4cc2-8edb-648a4fa4d836" (UID: "8f6c5844-fc22-4cc2-8edb-648a4fa4d836"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 19 11:36:10.287281 kubelet[3434]: I0319 11:36:10.287127 3434 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hc4wt\" (UniqueName: \"kubernetes.io/projected/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-kube-api-access-hc4wt\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287281 kubelet[3434]: I0319 11:36:10.287201 3434 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8c1448f8-47d8-4eee-9db3-5114bbc55e18-cilium-config-path\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287281 kubelet[3434]: I0319 11:36:10.287227 3434 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-xtables-lock\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287281 kubelet[3434]: I0319 11:36:10.287249 3434 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-hubble-tls\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287281 kubelet[3434]: I0319 11:36:10.287272 3434 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-config-path\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287625 kubelet[3434]: I0319 11:36:10.287293 3434 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-hostproc\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287625 kubelet[3434]: I0319 11:36:10.287314 3434 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-lib-modules\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287625 kubelet[3434]: I0319 11:36:10.287338 3434 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-clustermesh-secrets\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287625 kubelet[3434]: I0319 11:36:10.287359 3434 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-bpf-maps\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287625 kubelet[3434]: I0319 11:36:10.287381 3434 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-host-proc-sys-kernel\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287625 kubelet[3434]: I0319 11:36:10.287402 3434 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-etc-cni-netd\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287625 kubelet[3434]: I0319 11:36:10.287423 3434 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-host-proc-sys-net\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.287625 kubelet[3434]: I0319 11:36:10.287442 3434 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cni-path\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.288545 kubelet[3434]: I0319 11:36:10.287465 3434 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fzw27\" (UniqueName: \"kubernetes.io/projected/8c1448f8-47d8-4eee-9db3-5114bbc55e18-kube-api-access-fzw27\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.288545 kubelet[3434]: I0319 11:36:10.287489 3434 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f6c5844-fc22-4cc2-8edb-648a4fa4d836-cilium-run\") on node \"ip-172-31-16-168\" DevicePath \"\"" Mar 19 11:36:10.652877 kubelet[3434]: I0319 11:36:10.651865 3434 scope.go:117] "RemoveContainer" containerID="1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472" Mar 19 11:36:10.656650 containerd[1949]: time="2025-03-19T11:36:10.656597661Z" level=info msg="RemoveContainer for \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\"" Mar 19 11:36:10.673049 containerd[1949]: time="2025-03-19T11:36:10.672984069Z" level=info msg="RemoveContainer for \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\" returns successfully" Mar 19 11:36:10.674349 systemd[1]: Removed slice kubepods-burstable-pod8f6c5844_fc22_4cc2_8edb_648a4fa4d836.slice - libcontainer container 
kubepods-burstable-pod8f6c5844_fc22_4cc2_8edb_648a4fa4d836.slice. Mar 19 11:36:10.674577 systemd[1]: kubepods-burstable-pod8f6c5844_fc22_4cc2_8edb_648a4fa4d836.slice: Consumed 14.535s CPU time, 127.9M memory peak, 136K read from disk, 12.9M written to disk. Mar 19 11:36:10.676152 kubelet[3434]: I0319 11:36:10.676001 3434 scope.go:117] "RemoveContainer" containerID="cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660" Mar 19 11:36:10.683129 containerd[1949]: time="2025-03-19T11:36:10.681663417Z" level=info msg="RemoveContainer for \"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660\"" Mar 19 11:36:10.683509 systemd[1]: Removed slice kubepods-besteffort-pod8c1448f8_47d8_4eee_9db3_5114bbc55e18.slice - libcontainer container kubepods-besteffort-pod8c1448f8_47d8_4eee_9db3_5114bbc55e18.slice. Mar 19 11:36:10.691987 containerd[1949]: time="2025-03-19T11:36:10.691862073Z" level=info msg="RemoveContainer for \"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660\" returns successfully" Mar 19 11:36:10.692662 kubelet[3434]: I0319 11:36:10.692221 3434 scope.go:117] "RemoveContainer" containerID="f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2" Mar 19 11:36:10.695579 containerd[1949]: time="2025-03-19T11:36:10.695456817Z" level=info msg="RemoveContainer for \"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2\"" Mar 19 11:36:10.705648 containerd[1949]: time="2025-03-19T11:36:10.705514149Z" level=info msg="RemoveContainer for \"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2\" returns successfully" Mar 19 11:36:10.707618 kubelet[3434]: I0319 11:36:10.706290 3434 scope.go:117] "RemoveContainer" containerID="ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a" Mar 19 11:36:10.711106 containerd[1949]: time="2025-03-19T11:36:10.710656977Z" level=info msg="RemoveContainer for \"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a\"" Mar 19 11:36:10.717637 
containerd[1949]: time="2025-03-19T11:36:10.717585321Z" level=info msg="RemoveContainer for \"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a\" returns successfully" Mar 19 11:36:10.718382 kubelet[3434]: I0319 11:36:10.718218 3434 scope.go:117] "RemoveContainer" containerID="d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4" Mar 19 11:36:10.721668 containerd[1949]: time="2025-03-19T11:36:10.721163949Z" level=info msg="RemoveContainer for \"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4\"" Mar 19 11:36:10.729008 containerd[1949]: time="2025-03-19T11:36:10.728761377Z" level=info msg="RemoveContainer for \"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4\" returns successfully" Mar 19 11:36:10.729896 kubelet[3434]: I0319 11:36:10.729873 3434 scope.go:117] "RemoveContainer" containerID="1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472" Mar 19 11:36:10.731035 containerd[1949]: time="2025-03-19T11:36:10.730834437Z" level=error msg="ContainerStatus for \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\": not found" Mar 19 11:36:10.731445 kubelet[3434]: E0319 11:36:10.731134 3434 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\": not found" containerID="1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472" Mar 19 11:36:10.732328 kubelet[3434]: I0319 11:36:10.731760 3434 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472"} err="failed to get container status \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"1df7814366d8f167fa67be8c1d9bb56a344c00dbfeab6bc68d39e936a4e71472\": not found" Mar 19 11:36:10.732328 kubelet[3434]: I0319 11:36:10.732159 3434 scope.go:117] "RemoveContainer" containerID="cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660" Mar 19 11:36:10.734155 containerd[1949]: time="2025-03-19T11:36:10.733930365Z" level=error msg="ContainerStatus for \"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660\": not found" Mar 19 11:36:10.734946 kubelet[3434]: E0319 11:36:10.734657 3434 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660\": not found" containerID="cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660" Mar 19 11:36:10.734946 kubelet[3434]: I0319 11:36:10.734717 3434 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660"} err="failed to get container status \"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbcfee5f27a60d1f822eaa10cec88d4912c6a020d9fc64527d610872b1ee7660\": not found" Mar 19 11:36:10.734946 kubelet[3434]: I0319 11:36:10.734757 3434 scope.go:117] "RemoveContainer" containerID="f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2" Mar 19 11:36:10.735743 containerd[1949]: time="2025-03-19T11:36:10.735680385Z" level=error msg="ContainerStatus for \"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to 
find container \"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2\": not found" Mar 19 11:36:10.736264 kubelet[3434]: E0319 11:36:10.736216 3434 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2\": not found" containerID="f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2" Mar 19 11:36:10.736442 kubelet[3434]: I0319 11:36:10.736276 3434 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2"} err="failed to get container status \"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7f25a6a8ce73ef860f180e9087e31e6df77a6ba69cbce41f5552d3d3fbdcff2\": not found" Mar 19 11:36:10.736442 kubelet[3434]: I0319 11:36:10.736317 3434 scope.go:117] "RemoveContainer" containerID="ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a" Mar 19 11:36:10.736768 containerd[1949]: time="2025-03-19T11:36:10.736709433Z" level=error msg="ContainerStatus for \"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a\": not found" Mar 19 11:36:10.737225 kubelet[3434]: E0319 11:36:10.736999 3434 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a\": not found" containerID="ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a" Mar 19 11:36:10.737225 kubelet[3434]: I0319 11:36:10.737048 3434 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a"} err="failed to get container status \"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef8c1b8cf614c0711fb94e322fcd2202b5bfa575a889ead8b018d3bffd41ea1a\": not found" Mar 19 11:36:10.737225 kubelet[3434]: I0319 11:36:10.737082 3434 scope.go:117] "RemoveContainer" containerID="d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4" Mar 19 11:36:10.737635 containerd[1949]: time="2025-03-19T11:36:10.737472081Z" level=error msg="ContainerStatus for \"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4\": not found" Mar 19 11:36:10.737912 kubelet[3434]: E0319 11:36:10.737834 3434 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4\": not found" containerID="d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4" Mar 19 11:36:10.738203 kubelet[3434]: I0319 11:36:10.737881 3434 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4"} err="failed to get container status \"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5a01c2e8221a00ff2d3d18efd8598392ec1df574bc312e2eba149d44183cac4\": not found" Mar 19 11:36:10.738203 kubelet[3434]: I0319 11:36:10.738035 3434 scope.go:117] "RemoveContainer" containerID="c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670" Mar 19 11:36:10.740800 containerd[1949]: 
time="2025-03-19T11:36:10.740707209Z" level=info msg="RemoveContainer for \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\"" Mar 19 11:36:10.747313 containerd[1949]: time="2025-03-19T11:36:10.747263457Z" level=info msg="RemoveContainer for \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\" returns successfully" Mar 19 11:36:10.748405 kubelet[3434]: I0319 11:36:10.748355 3434 scope.go:117] "RemoveContainer" containerID="c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670" Mar 19 11:36:10.748849 containerd[1949]: time="2025-03-19T11:36:10.748784145Z" level=error msg="ContainerStatus for \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\": not found" Mar 19 11:36:10.749268 kubelet[3434]: E0319 11:36:10.749051 3434 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\": not found" containerID="c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670" Mar 19 11:36:10.749268 kubelet[3434]: I0319 11:36:10.749099 3434 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670"} err="failed to get container status \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1219d3d57c67e7a5c6702ccdd3aadccb0598a3ad906d549b323758802cbd670\": not found" Mar 19 11:36:10.767510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f-rootfs.mount: Deactivated successfully. 
Mar 19 11:36:10.767696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df-rootfs.mount: Deactivated successfully. Mar 19 11:36:10.767838 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df-shm.mount: Deactivated successfully. Mar 19 11:36:10.767983 systemd[1]: var-lib-kubelet-pods-8c1448f8\x2d47d8\x2d4eee\x2d9db3\x2d5114bbc55e18-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfzw27.mount: Deactivated successfully. Mar 19 11:36:10.768141 systemd[1]: var-lib-kubelet-pods-8f6c5844\x2dfc22\x2d4cc2\x2d8edb\x2d648a4fa4d836-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhc4wt.mount: Deactivated successfully. Mar 19 11:36:10.768343 systemd[1]: var-lib-kubelet-pods-8f6c5844\x2dfc22\x2d4cc2\x2d8edb\x2d648a4fa4d836-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 19 11:36:10.768495 systemd[1]: var-lib-kubelet-pods-8f6c5844\x2dfc22\x2d4cc2\x2d8edb\x2d648a4fa4d836-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 19 11:36:11.161505 kubelet[3434]: I0319 11:36:11.160802 3434 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c1448f8-47d8-4eee-9db3-5114bbc55e18" path="/var/lib/kubelet/pods/8c1448f8-47d8-4eee-9db3-5114bbc55e18/volumes" Mar 19 11:36:11.161856 kubelet[3434]: I0319 11:36:11.161808 3434 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f6c5844-fc22-4cc2-8edb-648a4fa4d836" path="/var/lib/kubelet/pods/8f6c5844-fc22-4cc2-8edb-648a4fa4d836/volumes" Mar 19 11:36:11.667820 sshd[5064]: Connection closed by 139.178.68.195 port 55548 Mar 19 11:36:11.668743 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Mar 19 11:36:11.674524 systemd[1]: sshd@27-172.31.16.168:22-139.178.68.195:55548.service: Deactivated successfully. 
Mar 19 11:36:11.678049 systemd[1]: session-28.scope: Deactivated successfully. Mar 19 11:36:11.678612 systemd[1]: session-28.scope: Consumed 1.438s CPU time, 21.6M memory peak. Mar 19 11:36:11.681928 systemd-logind[1936]: Session 28 logged out. Waiting for processes to exit. Mar 19 11:36:11.685225 systemd-logind[1936]: Removed session 28. Mar 19 11:36:11.705654 systemd[1]: Started sshd@28-172.31.16.168:22-139.178.68.195:55556.service - OpenSSH per-connection server daemon (139.178.68.195:55556). Mar 19 11:36:11.894923 sshd[5222]: Accepted publickey for core from 139.178.68.195 port 55556 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0 Mar 19 11:36:11.897681 sshd-session[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:36:11.907386 systemd-logind[1936]: New session 29 of user core. Mar 19 11:36:11.914454 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 19 11:36:12.329970 ntpd[1930]: Deleting interface #11 lxc_health, fe80::b9:25ff:fe35:58fb%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs Mar 19 11:36:12.330622 ntpd[1930]: 19 Mar 11:36:12 ntpd[1930]: Deleting interface #11 lxc_health, fe80::b9:25ff:fe35:58fb%8#123, interface stats: received=0, sent=0, dropped=0, active_time=80 secs Mar 19 11:36:12.409018 kubelet[3434]: E0319 11:36:12.408950 3434 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 19 11:36:12.872464 sshd[5224]: Connection closed by 139.178.68.195 port 55556 Mar 19 11:36:12.874458 sshd-session[5222]: pam_unix(sshd:session): session closed for user core Mar 19 11:36:12.885057 systemd[1]: sshd@28-172.31.16.168:22-139.178.68.195:55556.service: Deactivated successfully. Mar 19 11:36:12.885686 systemd-logind[1936]: Session 29 logged out. Waiting for processes to exit. 
Mar 19 11:36:12.890969 systemd[1]: session-29.scope: Deactivated successfully. Mar 19 11:36:12.914973 systemd-logind[1936]: Removed session 29. Mar 19 11:36:12.927772 systemd[1]: Started sshd@29-172.31.16.168:22-139.178.68.195:55564.service - OpenSSH per-connection server daemon (139.178.68.195:55564). Mar 19 11:36:12.984196 kubelet[3434]: I0319 11:36:12.983663 3434 memory_manager.go:355] "RemoveStaleState removing state" podUID="8f6c5844-fc22-4cc2-8edb-648a4fa4d836" containerName="cilium-agent" Mar 19 11:36:12.986224 kubelet[3434]: I0319 11:36:12.986139 3434 memory_manager.go:355] "RemoveStaleState removing state" podUID="8c1448f8-47d8-4eee-9db3-5114bbc55e18" containerName="cilium-operator" Mar 19 11:36:13.004724 systemd[1]: Created slice kubepods-burstable-pod5df01ba7_f672_48cd_87d3_270894352588.slice - libcontainer container kubepods-burstable-pod5df01ba7_f672_48cd_87d3_270894352588.slice. Mar 19 11:36:13.103746 kubelet[3434]: I0319 11:36:13.103697 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5df01ba7-f672-48cd-87d3-270894352588-cilium-run\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd" Mar 19 11:36:13.104308 kubelet[3434]: I0319 11:36:13.104268 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5df01ba7-f672-48cd-87d3-270894352588-bpf-maps\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd" Mar 19 11:36:13.104487 kubelet[3434]: I0319 11:36:13.104461 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5df01ba7-f672-48cd-87d3-270894352588-cilium-cgroup\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " 
pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.104659 kubelet[3434]: I0319 11:36:13.104633 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5df01ba7-f672-48cd-87d3-270894352588-xtables-lock\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.104817 kubelet[3434]: I0319 11:36:13.104791 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5df01ba7-f672-48cd-87d3-270894352588-host-proc-sys-kernel\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.106013 kubelet[3434]: I0319 11:36:13.104938 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-757pr\" (UniqueName: \"kubernetes.io/projected/5df01ba7-f672-48cd-87d3-270894352588-kube-api-access-757pr\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.106273 kubelet[3434]: I0319 11:36:13.106239 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5df01ba7-f672-48cd-87d3-270894352588-etc-cni-netd\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.106886 kubelet[3434]: I0319 11:36:13.106388 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5df01ba7-f672-48cd-87d3-270894352588-lib-modules\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.106886 kubelet[3434]: I0319 11:36:13.106506 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5df01ba7-f672-48cd-87d3-270894352588-cilium-ipsec-secrets\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.106886 kubelet[3434]: I0319 11:36:13.106567 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5df01ba7-f672-48cd-87d3-270894352588-host-proc-sys-net\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.106886 kubelet[3434]: I0319 11:36:13.106612 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5df01ba7-f672-48cd-87d3-270894352588-hubble-tls\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.106886 kubelet[3434]: I0319 11:36:13.106652 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5df01ba7-f672-48cd-87d3-270894352588-clustermesh-secrets\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.107193 kubelet[3434]: I0319 11:36:13.106688 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5df01ba7-f672-48cd-87d3-270894352588-cilium-config-path\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.107193 kubelet[3434]: I0319 11:36:13.106746 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5df01ba7-f672-48cd-87d3-270894352588-hostproc\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.107193 kubelet[3434]: I0319 11:36:13.106800 3434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5df01ba7-f672-48cd-87d3-270894352588-cni-path\") pod \"cilium-9nzzd\" (UID: \"5df01ba7-f672-48cd-87d3-270894352588\") " pod="kube-system/cilium-9nzzd"
Mar 19 11:36:13.132411 sshd[5235]: Accepted publickey for core from 139.178.68.195 port 55564 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:36:13.134059 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:13.148580 systemd-logind[1936]: New session 30 of user core.
Mar 19 11:36:13.156845 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 19 11:36:13.295948 sshd[5238]: Connection closed by 139.178.68.195 port 55564
Mar 19 11:36:13.296422 sshd-session[5235]: pam_unix(sshd:session): session closed for user core
Mar 19 11:36:13.302900 systemd[1]: sshd@29-172.31.16.168:22-139.178.68.195:55564.service: Deactivated successfully.
Mar 19 11:36:13.307471 systemd[1]: session-30.scope: Deactivated successfully.
Mar 19 11:36:13.311386 systemd-logind[1936]: Session 30 logged out. Waiting for processes to exit.
Mar 19 11:36:13.314979 containerd[1949]: time="2025-03-19T11:36:13.314728198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9nzzd,Uid:5df01ba7-f672-48cd-87d3-270894352588,Namespace:kube-system,Attempt:0,}"
Mar 19 11:36:13.315290 systemd-logind[1936]: Removed session 30.
Mar 19 11:36:13.347270 systemd[1]: Started sshd@30-172.31.16.168:22-139.178.68.195:55570.service - OpenSSH per-connection server daemon (139.178.68.195:55570).
Mar 19 11:36:13.369204 containerd[1949]: time="2025-03-19T11:36:13.368901094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 19 11:36:13.369204 containerd[1949]: time="2025-03-19T11:36:13.369093922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 19 11:36:13.369668 containerd[1949]: time="2025-03-19T11:36:13.369434962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:36:13.373186 containerd[1949]: time="2025-03-19T11:36:13.371488738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 19 11:36:13.420475 systemd[1]: Started cri-containerd-c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8.scope - libcontainer container c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8.
Mar 19 11:36:13.468980 containerd[1949]: time="2025-03-19T11:36:13.468889007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9nzzd,Uid:5df01ba7-f672-48cd-87d3-270894352588,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8\""
Mar 19 11:36:13.476390 containerd[1949]: time="2025-03-19T11:36:13.476237543Z" level=info msg="CreateContainer within sandbox \"c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 19 11:36:13.501503 containerd[1949]: time="2025-03-19T11:36:13.501220103Z" level=info msg="CreateContainer within sandbox \"c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"34ff06bb91476012f4733cc87443020891cc23069fa5453cb152c3b597c6cdc5\""
Mar 19 11:36:13.503188 containerd[1949]: time="2025-03-19T11:36:13.503005259Z" level=info msg="StartContainer for \"34ff06bb91476012f4733cc87443020891cc23069fa5453cb152c3b597c6cdc5\""
Mar 19 11:36:13.558486 systemd[1]: Started cri-containerd-34ff06bb91476012f4733cc87443020891cc23069fa5453cb152c3b597c6cdc5.scope - libcontainer container 34ff06bb91476012f4733cc87443020891cc23069fa5453cb152c3b597c6cdc5.
Mar 19 11:36:13.564695 sshd[5250]: Accepted publickey for core from 139.178.68.195 port 55570 ssh2: RSA SHA256:cKyZTObh1iONCreCvRKgPrHCULmkB+BpkmrYHGvOaD0
Mar 19 11:36:13.569142 sshd-session[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 19 11:36:13.578722 systemd-logind[1936]: New session 31 of user core.
Mar 19 11:36:13.586443 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 19 11:36:13.622612 containerd[1949]: time="2025-03-19T11:36:13.622529208Z" level=info msg="StartContainer for \"34ff06bb91476012f4733cc87443020891cc23069fa5453cb152c3b597c6cdc5\" returns successfully"
Mar 19 11:36:13.635503 systemd[1]: cri-containerd-34ff06bb91476012f4733cc87443020891cc23069fa5453cb152c3b597c6cdc5.scope: Deactivated successfully.
Mar 19 11:36:13.703766 containerd[1949]: time="2025-03-19T11:36:13.703569528Z" level=info msg="shim disconnected" id=34ff06bb91476012f4733cc87443020891cc23069fa5453cb152c3b597c6cdc5 namespace=k8s.io
Mar 19 11:36:13.703766 containerd[1949]: time="2025-03-19T11:36:13.703657704Z" level=warning msg="cleaning up after shim disconnected" id=34ff06bb91476012f4733cc87443020891cc23069fa5453cb152c3b597c6cdc5 namespace=k8s.io
Mar 19 11:36:13.703766 containerd[1949]: time="2025-03-19T11:36:13.703678620Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:14.156425 kubelet[3434]: E0319 11:36:14.156353 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-clxlv" podUID="610289e2-59cb-4edc-b724-65d69ebc84a4"
Mar 19 11:36:14.692993 containerd[1949]: time="2025-03-19T11:36:14.691931653Z" level=info msg="CreateContainer within sandbox \"c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 19 11:36:14.724003 containerd[1949]: time="2025-03-19T11:36:14.722762413Z" level=info msg="CreateContainer within sandbox \"c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"608217f1718b337b41ecff9a0fa164a0e27dd4444298ddbe00ba114a10c358bf\""
Mar 19 11:36:14.725013 containerd[1949]: time="2025-03-19T11:36:14.724870357Z" level=info msg="StartContainer for \"608217f1718b337b41ecff9a0fa164a0e27dd4444298ddbe00ba114a10c358bf\""
Mar 19 11:36:14.785522 systemd[1]: Started cri-containerd-608217f1718b337b41ecff9a0fa164a0e27dd4444298ddbe00ba114a10c358bf.scope - libcontainer container 608217f1718b337b41ecff9a0fa164a0e27dd4444298ddbe00ba114a10c358bf.
Mar 19 11:36:14.833862 containerd[1949]: time="2025-03-19T11:36:14.833789498Z" level=info msg="StartContainer for \"608217f1718b337b41ecff9a0fa164a0e27dd4444298ddbe00ba114a10c358bf\" returns successfully"
Mar 19 11:36:14.847739 systemd[1]: cri-containerd-608217f1718b337b41ecff9a0fa164a0e27dd4444298ddbe00ba114a10c358bf.scope: Deactivated successfully.
Mar 19 11:36:14.892384 containerd[1949]: time="2025-03-19T11:36:14.892143806Z" level=info msg="shim disconnected" id=608217f1718b337b41ecff9a0fa164a0e27dd4444298ddbe00ba114a10c358bf namespace=k8s.io
Mar 19 11:36:14.892384 containerd[1949]: time="2025-03-19T11:36:14.892250426Z" level=warning msg="cleaning up after shim disconnected" id=608217f1718b337b41ecff9a0fa164a0e27dd4444298ddbe00ba114a10c358bf namespace=k8s.io
Mar 19 11:36:14.892384 containerd[1949]: time="2025-03-19T11:36:14.892271378Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:15.219237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-608217f1718b337b41ecff9a0fa164a0e27dd4444298ddbe00ba114a10c358bf-rootfs.mount: Deactivated successfully.
Mar 19 11:36:15.697297 containerd[1949]: time="2025-03-19T11:36:15.696601634Z" level=info msg="CreateContainer within sandbox \"c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 19 11:36:15.737478 containerd[1949]: time="2025-03-19T11:36:15.737341010Z" level=info msg="CreateContainer within sandbox \"c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3c1980e1dda7ceab863e234f52a15f52c117fc1417a5bf3172fce9931f093454\""
Mar 19 11:36:15.739491 containerd[1949]: time="2025-03-19T11:36:15.739312322Z" level=info msg="StartContainer for \"3c1980e1dda7ceab863e234f52a15f52c117fc1417a5bf3172fce9931f093454\""
Mar 19 11:36:15.796502 systemd[1]: Started cri-containerd-3c1980e1dda7ceab863e234f52a15f52c117fc1417a5bf3172fce9931f093454.scope - libcontainer container 3c1980e1dda7ceab863e234f52a15f52c117fc1417a5bf3172fce9931f093454.
Mar 19 11:36:15.857996 containerd[1949]: time="2025-03-19T11:36:15.856429191Z" level=info msg="StartContainer for \"3c1980e1dda7ceab863e234f52a15f52c117fc1417a5bf3172fce9931f093454\" returns successfully"
Mar 19 11:36:15.863579 systemd[1]: cri-containerd-3c1980e1dda7ceab863e234f52a15f52c117fc1417a5bf3172fce9931f093454.scope: Deactivated successfully.
Mar 19 11:36:15.917384 containerd[1949]: time="2025-03-19T11:36:15.917249811Z" level=info msg="shim disconnected" id=3c1980e1dda7ceab863e234f52a15f52c117fc1417a5bf3172fce9931f093454 namespace=k8s.io
Mar 19 11:36:15.917384 containerd[1949]: time="2025-03-19T11:36:15.917346675Z" level=warning msg="cleaning up after shim disconnected" id=3c1980e1dda7ceab863e234f52a15f52c117fc1417a5bf3172fce9931f093454 namespace=k8s.io
Mar 19 11:36:15.917879 containerd[1949]: time="2025-03-19T11:36:15.917391699Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:16.156403 kubelet[3434]: E0319 11:36:16.156282 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-clxlv" podUID="610289e2-59cb-4edc-b724-65d69ebc84a4"
Mar 19 11:36:16.219925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c1980e1dda7ceab863e234f52a15f52c117fc1417a5bf3172fce9931f093454-rootfs.mount: Deactivated successfully.
Mar 19 11:36:16.707028 containerd[1949]: time="2025-03-19T11:36:16.702820503Z" level=info msg="CreateContainer within sandbox \"c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 19 11:36:16.732491 containerd[1949]: time="2025-03-19T11:36:16.732042831Z" level=info msg="CreateContainer within sandbox \"c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f448b4abde390a776a9c279defe3b3229c91dafdaed9c526568532e20aefebb\""
Mar 19 11:36:16.739746 containerd[1949]: time="2025-03-19T11:36:16.734451483Z" level=info msg="StartContainer for \"5f448b4abde390a776a9c279defe3b3229c91dafdaed9c526568532e20aefebb\""
Mar 19 11:36:16.811496 systemd[1]: Started cri-containerd-5f448b4abde390a776a9c279defe3b3229c91dafdaed9c526568532e20aefebb.scope - libcontainer container 5f448b4abde390a776a9c279defe3b3229c91dafdaed9c526568532e20aefebb.
Mar 19 11:36:16.858288 systemd[1]: cri-containerd-5f448b4abde390a776a9c279defe3b3229c91dafdaed9c526568532e20aefebb.scope: Deactivated successfully.
Mar 19 11:36:16.865758 containerd[1949]: time="2025-03-19T11:36:16.865681636Z" level=info msg="StartContainer for \"5f448b4abde390a776a9c279defe3b3229c91dafdaed9c526568532e20aefebb\" returns successfully"
Mar 19 11:36:16.905903 containerd[1949]: time="2025-03-19T11:36:16.905826328Z" level=info msg="shim disconnected" id=5f448b4abde390a776a9c279defe3b3229c91dafdaed9c526568532e20aefebb namespace=k8s.io
Mar 19 11:36:16.906386 containerd[1949]: time="2025-03-19T11:36:16.906348988Z" level=warning msg="cleaning up after shim disconnected" id=5f448b4abde390a776a9c279defe3b3229c91dafdaed9c526568532e20aefebb namespace=k8s.io
Mar 19 11:36:16.906532 containerd[1949]: time="2025-03-19T11:36:16.906504844Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 19 11:36:17.190714 containerd[1949]: time="2025-03-19T11:36:17.190660597Z" level=info msg="StopPodSandbox for \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\""
Mar 19 11:36:17.190875 containerd[1949]: time="2025-03-19T11:36:17.190808113Z" level=info msg="TearDown network for sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" successfully"
Mar 19 11:36:17.190875 containerd[1949]: time="2025-03-19T11:36:17.190832689Z" level=info msg="StopPodSandbox for \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" returns successfully"
Mar 19 11:36:17.191582 containerd[1949]: time="2025-03-19T11:36:17.191383453Z" level=info msg="RemovePodSandbox for \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\""
Mar 19 11:36:17.191582 containerd[1949]: time="2025-03-19T11:36:17.191433265Z" level=info msg="Forcibly stopping sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\""
Mar 19 11:36:17.191582 containerd[1949]: time="2025-03-19T11:36:17.191526745Z" level=info msg="TearDown network for sandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" successfully"
Mar 19 11:36:17.199325 containerd[1949]: time="2025-03-19T11:36:17.199263673Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:36:17.199509 containerd[1949]: time="2025-03-19T11:36:17.199361449Z" level=info msg="RemovePodSandbox \"84baaa08b8996ac1b2943a4fd712b76dc14c1b553f602d1987f2c74539d9d9df\" returns successfully"
Mar 19 11:36:17.200729 containerd[1949]: time="2025-03-19T11:36:17.200488489Z" level=info msg="StopPodSandbox for \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\""
Mar 19 11:36:17.200729 containerd[1949]: time="2025-03-19T11:36:17.200624569Z" level=info msg="TearDown network for sandbox \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\" successfully"
Mar 19 11:36:17.200729 containerd[1949]: time="2025-03-19T11:36:17.200645377Z" level=info msg="StopPodSandbox for \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\" returns successfully"
Mar 19 11:36:17.201495 containerd[1949]: time="2025-03-19T11:36:17.201154933Z" level=info msg="RemovePodSandbox for \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\""
Mar 19 11:36:17.201626 containerd[1949]: time="2025-03-19T11:36:17.201506401Z" level=info msg="Forcibly stopping sandbox \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\""
Mar 19 11:36:17.201626 containerd[1949]: time="2025-03-19T11:36:17.201615265Z" level=info msg="TearDown network for sandbox \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\" successfully"
Mar 19 11:36:17.207807 containerd[1949]: time="2025-03-19T11:36:17.207583693Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 19 11:36:17.207807 containerd[1949]: time="2025-03-19T11:36:17.207669889Z" level=info msg="RemovePodSandbox \"4c283ec52afc3a2628e7a28aebd46ca070c86ea799c8ace9bafa8553dfd0593f\" returns successfully"
Mar 19 11:36:17.219373 systemd[1]: run-containerd-runc-k8s.io-5f448b4abde390a776a9c279defe3b3229c91dafdaed9c526568532e20aefebb-runc.D4unr3.mount: Deactivated successfully.
Mar 19 11:36:17.219550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f448b4abde390a776a9c279defe3b3229c91dafdaed9c526568532e20aefebb-rootfs.mount: Deactivated successfully.
Mar 19 11:36:17.414111 kubelet[3434]: E0319 11:36:17.414040 3434 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 19 11:36:17.710274 containerd[1949]: time="2025-03-19T11:36:17.710022808Z" level=info msg="CreateContainer within sandbox \"c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 19 11:36:17.754943 containerd[1949]: time="2025-03-19T11:36:17.752131852Z" level=info msg="CreateContainer within sandbox \"c5937a747bda5a9bc1406f29650ef54a7384b3b0beebc98604c6b3b6643b56d8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5becd5a49880613c80920e546e90b9f6f8192750f355339e99277b11165cd111\""
Mar 19 11:36:17.754943 containerd[1949]: time="2025-03-19T11:36:17.753228208Z" level=info msg="StartContainer for \"5becd5a49880613c80920e546e90b9f6f8192750f355339e99277b11165cd111\""
Mar 19 11:36:17.815517 systemd[1]: Started cri-containerd-5becd5a49880613c80920e546e90b9f6f8192750f355339e99277b11165cd111.scope - libcontainer container 5becd5a49880613c80920e546e90b9f6f8192750f355339e99277b11165cd111.
Mar 19 11:36:17.868639 containerd[1949]: time="2025-03-19T11:36:17.868505765Z" level=info msg="StartContainer for \"5becd5a49880613c80920e546e90b9f6f8192750f355339e99277b11165cd111\" returns successfully"
Mar 19 11:36:18.158336 kubelet[3434]: E0319 11:36:18.156491 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-clxlv" podUID="610289e2-59cb-4edc-b724-65d69ebc84a4"
Mar 19 11:36:18.702372 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 19 11:36:18.757537 kubelet[3434]: I0319 11:36:18.756290 3434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9nzzd" podStartSLOduration=6.756263609 podStartE2EDuration="6.756263609s" podCreationTimestamp="2025-03-19 11:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:36:18.754584965 +0000 UTC m=+121.834338102" watchObservedRunningTime="2025-03-19 11:36:18.756263609 +0000 UTC m=+121.836016626"
Mar 19 11:36:20.157594 kubelet[3434]: E0319 11:36:20.156042 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-clxlv" podUID="610289e2-59cb-4edc-b724-65d69ebc84a4"
Mar 19 11:36:20.937036 kubelet[3434]: I0319 11:36:20.936966 3434 setters.go:602] "Node became not ready" node="ip-172-31-16-168" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-19T11:36:20Z","lastTransitionTime":"2025-03-19T11:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 19 11:36:22.157029 kubelet[3434]: E0319 11:36:22.156949 3434 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-clxlv" podUID="610289e2-59cb-4edc-b724-65d69ebc84a4"
Mar 19 11:36:22.924521 systemd-networkd[1866]: lxc_health: Link UP
Mar 19 11:36:22.935023 (udev-worker)[6085]: Network interface NamePolicy= disabled on kernel command line.
Mar 19 11:36:22.944605 systemd-networkd[1866]: lxc_health: Gained carrier
Mar 19 11:36:24.265454 systemd-networkd[1866]: lxc_health: Gained IPv6LL
Mar 19 11:36:24.828594 systemd[1]: run-containerd-runc-k8s.io-5becd5a49880613c80920e546e90b9f6f8192750f355339e99277b11165cd111-runc.6BJTtr.mount: Deactivated successfully.
Mar 19 11:36:26.330705 ntpd[1930]: Listen normally on 14 lxc_health [fe80::685d:61ff:fe3b:40ec%14]:123
Mar 19 11:36:26.331849 ntpd[1930]: 19 Mar 11:36:26 ntpd[1930]: Listen normally on 14 lxc_health [fe80::685d:61ff:fe3b:40ec%14]:123
Mar 19 11:36:29.476897 systemd[1]: run-containerd-runc-k8s.io-5becd5a49880613c80920e546e90b9f6f8192750f355339e99277b11165cd111-runc.UjHAQL.mount: Deactivated successfully.
Mar 19 11:36:29.589690 sshd[5314]: Connection closed by 139.178.68.195 port 55570
Mar 19 11:36:29.590400 sshd-session[5250]: pam_unix(sshd:session): session closed for user core
Mar 19 11:36:29.598714 systemd[1]: sshd@30-172.31.16.168:22-139.178.68.195:55570.service: Deactivated successfully.
Mar 19 11:36:29.604860 systemd[1]: session-31.scope: Deactivated successfully.
Mar 19 11:36:29.608875 systemd-logind[1936]: Session 31 logged out. Waiting for processes to exit.
Mar 19 11:36:29.613330 systemd-logind[1936]: Removed session 31.