Mar 17 18:19:02.949955 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 17 18:19:02.949992 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025
Mar 17 18:19:02.950015 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:19:02.950030 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7171cf98
Mar 17 18:19:02.950044 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:19:02.950057 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 17 18:19:02.950073 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 17 18:19:02.950088 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 17 18:19:02.950102 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Mar 17 18:19:02.950115 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 17 18:19:02.950133 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 17 18:19:02.950147 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 17 18:19:02.950161 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 17 18:19:02.950176 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 17 18:19:02.950192 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 17 18:19:02.950211 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 17 18:19:02.950225 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 17 18:19:02.950240 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 17 18:19:02.950254 kernel: printk: bootconsole [uart0] enabled
Mar 17 18:19:02.950280 kernel: NUMA: Failed to initialise from firmware
Mar 17 18:19:02.950302 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 18:19:02.950318 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Mar 17 18:19:02.950334 kernel: Zone ranges:
Mar 17 18:19:02.950349 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 18:19:02.950363 kernel: DMA32 empty
Mar 17 18:19:02.950378 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 17 18:19:02.950397 kernel: Movable zone start for each node
Mar 17 18:19:02.950411 kernel: Early memory node ranges
Mar 17 18:19:02.950426 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 17 18:19:02.950440 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 17 18:19:02.950455 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 17 18:19:02.950469 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 17 18:19:02.950483 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 17 18:19:02.950498 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 17 18:19:02.950512 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 17 18:19:02.950527 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 17 18:19:02.950541 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 18:19:02.950555 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 17 18:19:02.950574 kernel: psci: probing for conduit method from ACPI.
Mar 17 18:19:02.950588 kernel: psci: PSCIv1.0 detected in firmware.
Mar 17 18:19:02.950609 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 18:19:02.950625 kernel: psci: Trusted OS migration not required
Mar 17 18:19:02.950640 kernel: psci: SMC Calling Convention v1.1
Mar 17 18:19:02.950659 kernel: ACPI: SRAT not present
Mar 17 18:19:02.950674 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Mar 17 18:19:02.950690 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Mar 17 18:19:02.950705 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 18:19:02.950720 kernel: Detected PIPT I-cache on CPU0
Mar 17 18:19:02.950735 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 18:19:02.950771 kernel: CPU features: detected: Spectre-v2
Mar 17 18:19:02.950788 kernel: CPU features: detected: Spectre-v3a
Mar 17 18:19:02.950803 kernel: CPU features: detected: Spectre-BHB
Mar 17 18:19:02.950818 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 18:19:02.950834 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 18:19:02.950853 kernel: CPU features: detected: ARM erratum 1742098
Mar 17 18:19:02.950869 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 17 18:19:02.950884 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 17 18:19:02.950899 kernel: Policy zone: Normal
Mar 17 18:19:02.950917 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:19:02.950934 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:19:02.950949 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:19:02.950965 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:19:02.950980 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:19:02.950995 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 17 18:19:02.951015 kernel: Memory: 3824524K/4030464K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 205940K reserved, 0K cma-reserved)
Mar 17 18:19:02.951031 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 18:19:02.951046 kernel: trace event string verifier disabled
Mar 17 18:19:02.951062 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:19:02.951078 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:19:02.951094 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 18:19:02.951110 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:19:02.951125 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:19:02.951140 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:19:02.951156 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 18:19:02.951171 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 18:19:02.951187 kernel: GICv3: 96 SPIs implemented
Mar 17 18:19:02.951206 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 18:19:02.951221 kernel: GICv3: Distributor has no Range Selector support
Mar 17 18:19:02.951236 kernel: Root IRQ handler: gic_handle_irq
Mar 17 18:19:02.951251 kernel: GICv3: 16 PPIs implemented
Mar 17 18:19:02.951266 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 17 18:19:02.951281 kernel: ACPI: SRAT not present
Mar 17 18:19:02.951296 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 17 18:19:02.951312 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 18:19:02.951328 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 18:19:02.951343 kernel: GICv3: using LPI property table @0x00000004000b0000
Mar 17 18:19:02.951358 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 17 18:19:02.951377 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Mar 17 18:19:02.951393 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 17 18:19:02.951408 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 17 18:19:02.951423 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 17 18:19:02.951439 kernel: Console: colour dummy device 80x25
Mar 17 18:19:02.951455 kernel: printk: console [tty1] enabled
Mar 17 18:19:02.951470 kernel: ACPI: Core revision 20210730
Mar 17 18:19:02.951486 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 17 18:19:02.951503 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:19:02.951518 kernel: LSM: Security Framework initializing
Mar 17 18:19:02.951537 kernel: SELinux: Initializing.
Mar 17 18:19:02.951553 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:19:02.951569 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:19:02.951584 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:19:02.951600 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 17 18:19:02.951616 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 17 18:19:02.951631 kernel: Remapping and enabling EFI services.
Mar 17 18:19:02.951646 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:19:02.951662 kernel: Detected PIPT I-cache on CPU1
Mar 17 18:19:02.951678 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 17 18:19:02.951697 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Mar 17 18:19:02.951713 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 17 18:19:02.951728 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:19:02.951759 kernel: SMP: Total of 2 processors activated.
Mar 17 18:19:02.951779 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 18:19:02.951795 kernel: CPU features: detected: 32-bit EL1 Support
Mar 17 18:19:02.951811 kernel: CPU features: detected: CRC32 instructions
Mar 17 18:19:02.951826 kernel: CPU: All CPU(s) started at EL1
Mar 17 18:19:02.951842 kernel: alternatives: patching kernel code
Mar 17 18:19:02.951862 kernel: devtmpfs: initialized
Mar 17 18:19:02.951878 kernel: KASLR disabled due to lack of seed
Mar 17 18:19:02.951904 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:19:02.951924 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 18:19:02.951940 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:19:02.951956 kernel: SMBIOS 3.0.0 present.
Mar 17 18:19:02.951972 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 17 18:19:02.951988 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:19:02.952005 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 18:19:02.952021 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 18:19:02.952038 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 18:19:02.952058 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:19:02.952074 kernel: audit: type=2000 audit(0.247:1): state=initialized audit_enabled=0 res=1
Mar 17 18:19:02.952090 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:19:02.952107 kernel: cpuidle: using governor menu
Mar 17 18:19:02.952123 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 18:19:02.952143 kernel: ASID allocator initialised with 32768 entries
Mar 17 18:19:02.952160 kernel: ACPI: bus type PCI registered
Mar 17 18:19:02.952176 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:19:02.952192 kernel: Serial: AMBA PL011 UART driver
Mar 17 18:19:02.952208 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:19:02.952225 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 18:19:02.952241 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:19:02.952257 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 18:19:02.952273 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:19:02.952293 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 18:19:02.952309 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:19:02.952325 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:19:02.952341 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:19:02.952358 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:19:02.952374 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:19:02.952390 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:19:02.952407 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:19:02.952423 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:19:02.952442 kernel: ACPI: Interpreter enabled
Mar 17 18:19:02.952459 kernel: ACPI: Using GIC for interrupt routing
Mar 17 18:19:02.952475 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 18:19:02.952491 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Mar 17 18:19:02.952774 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:19:02.952976 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 18:19:02.953162 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 18:19:02.953347 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Mar 17 18:19:02.953537 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Mar 17 18:19:02.953559 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 17 18:19:02.953576 kernel: acpiphp: Slot [1] registered
Mar 17 18:19:02.953593 kernel: acpiphp: Slot [2] registered
Mar 17 18:19:02.953609 kernel: acpiphp: Slot [3] registered
Mar 17 18:19:02.953625 kernel: acpiphp: Slot [4] registered
Mar 17 18:19:02.953641 kernel: acpiphp: Slot [5] registered
Mar 17 18:19:02.953657 kernel: acpiphp: Slot [6] registered
Mar 17 18:19:02.953673 kernel: acpiphp: Slot [7] registered
Mar 17 18:19:02.953694 kernel: acpiphp: Slot [8] registered
Mar 17 18:19:02.953710 kernel: acpiphp: Slot [9] registered
Mar 17 18:19:02.953726 kernel: acpiphp: Slot [10] registered
Mar 17 18:19:02.953756 kernel: acpiphp: Slot [11] registered
Mar 17 18:19:02.953777 kernel: acpiphp: Slot [12] registered
Mar 17 18:19:02.953794 kernel: acpiphp: Slot [13] registered
Mar 17 18:19:02.953810 kernel: acpiphp: Slot [14] registered
Mar 17 18:19:02.953827 kernel: acpiphp: Slot [15] registered
Mar 17 18:19:02.953843 kernel: acpiphp: Slot [16] registered
Mar 17 18:19:02.953863 kernel: acpiphp: Slot [17] registered
Mar 17 18:19:02.953880 kernel: acpiphp: Slot [18] registered
Mar 17 18:19:02.953896 kernel: acpiphp: Slot [19] registered
Mar 17 18:19:02.953912 kernel: acpiphp: Slot [20] registered
Mar 17 18:19:02.953928 kernel: acpiphp: Slot [21] registered
Mar 17 18:19:02.953944 kernel: acpiphp: Slot [22] registered
Mar 17 18:19:02.953960 kernel: acpiphp: Slot [23] registered
Mar 17 18:19:02.953976 kernel: acpiphp: Slot [24] registered
Mar 17 18:19:02.953992 kernel: acpiphp: Slot [25] registered
Mar 17 18:19:02.954008 kernel: acpiphp: Slot [26] registered
Mar 17 18:19:02.954028 kernel: acpiphp: Slot [27] registered
Mar 17 18:19:02.954044 kernel: acpiphp: Slot [28] registered
Mar 17 18:19:02.954060 kernel: acpiphp: Slot [29] registered
Mar 17 18:19:02.954076 kernel: acpiphp: Slot [30] registered
Mar 17 18:19:02.954092 kernel: acpiphp: Slot [31] registered
Mar 17 18:19:02.954108 kernel: PCI host bridge to bus 0000:00
Mar 17 18:19:02.954313 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 17 18:19:02.954490 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 18:19:02.954664 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 17 18:19:02.965932 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Mar 17 18:19:02.966203 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 17 18:19:02.966479 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 17 18:19:02.966683 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 17 18:19:02.966927 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 17 18:19:02.967128 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 17 18:19:02.967320 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 18:19:02.967523 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 17 18:19:02.967714 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 17 18:19:02.974613 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 17 18:19:02.974869 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 17 18:19:02.976842 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 18:19:02.977090 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Mar 17 18:19:02.977287 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Mar 17 18:19:02.977486 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Mar 17 18:19:02.977680 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Mar 17 18:19:02.977979 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Mar 17 18:19:02.978159 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 17 18:19:02.978353 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 18:19:02.978530 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 17 18:19:02.978553 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 18:19:02.978571 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 18:19:02.978587 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 18:19:02.978604 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 18:19:02.978621 kernel: iommu: Default domain type: Translated
Mar 17 18:19:02.978637 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 18:19:02.978653 kernel: vgaarb: loaded
Mar 17 18:19:02.978670 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:19:02.978691 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:19:02.978708 kernel: PTP clock support registered
Mar 17 18:19:02.978724 kernel: Registered efivars operations
Mar 17 18:19:02.978754 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 18:19:02.978776 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:19:02.978794 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:19:02.978810 kernel: pnp: PnP ACPI init
Mar 17 18:19:02.979007 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 17 18:19:02.979036 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 18:19:02.979053 kernel: NET: Registered PF_INET protocol family
Mar 17 18:19:02.979070 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:19:02.979087 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:19:02.979107 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:19:02.979124 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:19:02.979141 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:19:02.979158 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:19:02.979174 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:19:02.979195 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:19:02.979212 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:19:02.979228 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:19:02.979245 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 17 18:19:02.979261 kernel: kvm [1]: HYP mode not available
Mar 17 18:19:02.979277 kernel: Initialise system trusted keyrings
Mar 17 18:19:02.979294 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:19:02.979310 kernel: Key type asymmetric registered
Mar 17 18:19:02.979327 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:19:02.979347 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:19:02.979364 kernel: io scheduler mq-deadline registered
Mar 17 18:19:02.979380 kernel: io scheduler kyber registered
Mar 17 18:19:02.979396 kernel: io scheduler bfq registered
Mar 17 18:19:02.979592 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 17 18:19:02.979617 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 18:19:02.979633 kernel: ACPI: button: Power Button [PWRB]
Mar 17 18:19:02.979650 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 17 18:19:02.979671 kernel: ACPI: button: Sleep Button [SLPB]
Mar 17 18:19:02.979688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:19:02.979705 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 18:19:02.979944 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 17 18:19:02.979969 kernel: printk: console [ttyS0] disabled
Mar 17 18:19:02.979986 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 17 18:19:02.980003 kernel: printk: console [ttyS0] enabled
Mar 17 18:19:02.980019 kernel: printk: bootconsole [uart0] disabled
Mar 17 18:19:02.980036 kernel: thunder_xcv, ver 1.0
Mar 17 18:19:02.980052 kernel: thunder_bgx, ver 1.0
Mar 17 18:19:02.980073 kernel: nicpf, ver 1.0
Mar 17 18:19:02.980089 kernel: nicvf, ver 1.0
Mar 17 18:19:02.980296 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 18:19:02.980474 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:19:02 UTC (1742235542)
Mar 17 18:19:02.980497 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:19:02.980514 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:19:02.980530 kernel: Segment Routing with IPv6
Mar 17 18:19:02.980546 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:19:02.980567 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:19:02.980583 kernel: Key type dns_resolver registered
Mar 17 18:19:02.980600 kernel: registered taskstats version 1
Mar 17 18:19:02.980616 kernel: Loading compiled-in X.509 certificates
Mar 17 18:19:02.980643 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4'
Mar 17 18:19:02.980676 kernel: Key type .fscrypt registered
Mar 17 18:19:02.980693 kernel: Key type fscrypt-provisioning registered
Mar 17 18:19:02.980710 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:19:02.980726 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:19:02.980825 kernel: ima: No architecture policies found
Mar 17 18:19:02.980845 kernel: clk: Disabling unused clocks
Mar 17 18:19:02.980879 kernel: Freeing unused kernel memory: 36416K
Mar 17 18:19:02.980896 kernel: Run /init as init process
Mar 17 18:19:02.980913 kernel: with arguments:
Mar 17 18:19:02.980929 kernel: /init
Mar 17 18:19:02.980945 kernel: with environment:
Mar 17 18:19:02.980961 kernel: HOME=/
Mar 17 18:19:02.980977 kernel: TERM=linux
Mar 17 18:19:02.980998 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:19:02.981020 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:19:02.981042 systemd[1]: Detected virtualization amazon.
Mar 17 18:19:02.981061 systemd[1]: Detected architecture arm64.
Mar 17 18:19:02.981078 systemd[1]: Running in initrd.
Mar 17 18:19:02.981096 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:19:02.981113 systemd[1]: Hostname set to .
Mar 17 18:19:02.981136 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:19:02.981154 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:19:02.981171 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:19:02.981189 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:19:02.981206 systemd[1]: Reached target paths.target.
Mar 17 18:19:02.981224 systemd[1]: Reached target slices.target.
Mar 17 18:19:02.981242 systemd[1]: Reached target swap.target.
Mar 17 18:19:02.981259 systemd[1]: Reached target timers.target.
Mar 17 18:19:02.981282 systemd[1]: Listening on iscsid.socket.
Mar 17 18:19:02.981300 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:19:02.981317 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:19:02.981335 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:19:02.981353 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:19:02.981371 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:19:02.981389 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:19:02.981407 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:19:02.981429 systemd[1]: Reached target sockets.target.
Mar 17 18:19:02.981447 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:19:02.981465 systemd[1]: Finished network-cleanup.service.
Mar 17 18:19:02.981483 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:19:02.981501 systemd[1]: Starting systemd-journald.service...
Mar 17 18:19:02.981519 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:19:02.981537 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:19:02.981555 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:19:02.981572 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:19:02.981595 kernel: audit: type=1130 audit(1742235542.936:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:02.981614 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:19:02.981632 kernel: audit: type=1130 audit(1742235542.946:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:02.981649 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:19:02.981667 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:19:02.981685 kernel: audit: type=1130 audit(1742235542.959:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:02.981703 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:19:02.981725 systemd-journald[309]: Journal started
Mar 17 18:19:02.981904 systemd-journald[309]: Runtime Journal (/run/log/journal/ec209172b158edac6ebbcaa2e9641b26) is 8.0M, max 75.4M, 67.4M free.
Mar 17 18:19:02.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:02.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:02.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:02.958338 systemd-modules-load[310]: Inserted module 'overlay'
Mar 17 18:19:02.993931 systemd[1]: Started systemd-journald.service.
Mar 17 18:19:03.015821 kernel: audit: type=1130 audit(1742235543.006:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.024195 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:19:03.026981 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:19:03.047919 kernel: audit: type=1130 audit(1742235543.024:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.062405 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:19:03.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.039165 systemd-resolved[311]: Positive Trust Anchors:
Mar 17 18:19:03.073596 kernel: Bridge firewalling registered
Mar 17 18:19:03.073633 kernel: audit: type=1130 audit(1742235543.047:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.039180 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:19:03.039237 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:19:03.051133 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:19:03.098770 kernel: SCSI subsystem initialized
Mar 17 18:19:03.064972 systemd-modules-load[310]: Inserted module 'br_netfilter'
Mar 17 18:19:03.110386 dracut-cmdline[326]: dracut-dracut-053
Mar 17 18:19:03.120762 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:19:03.120827 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:19:03.120948 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:19:03.142422 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:19:03.143499 systemd-modules-load[310]: Inserted module 'dm_multipath'
Mar 17 18:19:03.145379 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:19:03.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.158711 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:19:03.168009 kernel: audit: type=1130 audit(1742235543.145:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.188519 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:19:03.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.198771 kernel: audit: type=1130 audit(1742235543.188:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.287780 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:19:03.308778 kernel: iscsi: registered transport (tcp)
Mar 17 18:19:03.335584 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:19:03.335665 kernel: QLogic iSCSI HBA Driver
Mar 17 18:19:03.502429 systemd-resolved[311]: Defaulting to hostname 'linux'.
Mar 17 18:19:03.504575 kernel: random: crng init done
Mar 17 18:19:03.505430 systemd[1]: Started systemd-resolved.service.
Mar 17 18:19:03.516661 kernel: audit: type=1130 audit(1742235543.505:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.507204 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:19:03.534738 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:19:03.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:03.539198 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:19:03.604785 kernel: raid6: neonx8 gen() 6413 MB/s
Mar 17 18:19:03.622774 kernel: raid6: neonx8 xor() 4641 MB/s
Mar 17 18:19:03.640774 kernel: raid6: neonx4 gen() 6559 MB/s
Mar 17 18:19:03.658773 kernel: raid6: neonx4 xor() 4818 MB/s
Mar 17 18:19:03.676776 kernel: raid6: neonx2 gen() 5752 MB/s
Mar 17 18:19:03.694773 kernel: raid6: neonx2 xor() 4406 MB/s
Mar 17 18:19:03.712773 kernel: raid6: neonx1 gen() 4492 MB/s
Mar 17 18:19:03.730775 kernel: raid6: neonx1 xor() 3602 MB/s
Mar 17 18:19:03.748773 kernel: raid6: int64x8 gen() 3443 MB/s
Mar 17 18:19:03.766773 kernel: raid6: int64x8 xor() 2063 MB/s
Mar 17 18:19:03.784774 kernel: raid6: int64x4 gen() 3842 MB/s
Mar 17 18:19:03.802773 kernel: raid6: int64x4 xor() 2169 MB/s
Mar 17 18:19:03.820774 kernel: raid6: int64x2 gen() 3610 MB/s
Mar 17 18:19:03.838773 kernel: raid6: int64x2 xor() 1927 MB/s
Mar 17 18:19:03.856774 kernel: raid6: int64x1 gen() 2764 MB/s
Mar 17 18:19:03.875911 kernel: raid6: int64x1 xor() 1399 MB/s
Mar 17 18:19:03.875947 kernel: raid6: using algorithm neonx4 gen() 6559 MB/s
Mar 17 18:19:03.875971 kernel: raid6: .... xor() 4818 MB/s, rmw enabled
Mar 17 18:19:03.877528 kernel: raid6: using neon recovery algorithm
Mar 17 18:19:03.896986 kernel: xor: measuring software checksum speed
Mar 17 18:19:03.897050 kernel: 8regs : 9297 MB/sec
Mar 17 18:19:03.898709 kernel: 32regs : 11106 MB/sec
Mar 17 18:19:03.902163 kernel: arm64_neon : 8997 MB/sec
Mar 17 18:19:03.902195 kernel: xor: using function: 32regs (11106 MB/sec)
Mar 17 18:19:03.991787 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Mar 17 18:19:04.008348 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:19:04.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:04.010000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:19:04.010000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:19:04.012678 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:19:04.041203 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Mar 17 18:19:04.051862 systemd[1]: Started systemd-udevd.service.
Mar 17 18:19:04.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:04.056055 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:19:04.084382 dracut-pre-trigger[513]: rd.md=0: removing MD RAID activation
Mar 17 18:19:04.145133 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:19:04.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:04.149577 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:19:04.263239 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:19:04.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:04.385767 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 18:19:04.385838 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 17 18:19:04.407426 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 17 18:19:04.407462 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 17 18:19:04.407726 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 17 18:19:04.407963 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 17 18:19:04.408162 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 17 18:19:04.408353 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:3a:fc:ea:20:99
Mar 17 18:19:04.413229 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:19:04.413279 kernel: GPT:9289727 != 16777215
Mar 17 18:19:04.413303 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:19:04.415200 kernel: GPT:9289727 != 16777215
Mar 17 18:19:04.416357 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:19:04.419373 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 18:19:04.423258 (udev-worker)[568]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:19:04.504795 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (574)
Mar 17 18:19:04.527148 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:19:04.601833 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:19:04.613410 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:19:04.637934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:19:04.651215 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:19:04.656010 systemd[1]: Starting disk-uuid.service...
Mar 17 18:19:04.676873 disk-uuid[675]: Primary Header is updated.
Mar 17 18:19:04.676873 disk-uuid[675]: Secondary Entries is updated.
Mar 17 18:19:04.676873 disk-uuid[675]: Secondary Header is updated.
Mar 17 18:19:04.687938 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 18:19:04.693778 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 18:19:05.705100 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 18:19:05.705226 disk-uuid[676]: The operation has completed successfully.
Mar 17 18:19:05.869280 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:19:05.869864 systemd[1]: Finished disk-uuid.service.
Mar 17 18:19:05.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:05.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:05.895509 systemd[1]: Starting verity-setup.service...
Mar 17 18:19:05.929239 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 18:19:06.035204 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 18:19:06.040125 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 18:19:06.043518 systemd[1]: Finished verity-setup.service.
Mar 17 18:19:06.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:06.133775 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 18:19:06.134370 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 18:19:06.136319 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Mar 17 18:19:06.142565 systemd[1]: Starting ignition-setup.service...
Mar 17 18:19:06.145446 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 18:19:06.179981 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:19:06.180058 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 18:19:06.182656 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Mar 17 18:19:06.195790 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 18:19:06.214555 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:19:06.231131 systemd[1]: Finished ignition-setup.service.
Mar 17 18:19:06.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:06.233418 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 18:19:06.293641 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 18:19:06.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:06.295000 audit: BPF prog-id=9 op=LOAD
Mar 17 18:19:06.298413 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:19:06.345899 systemd-networkd[1199]: lo: Link UP
Mar 17 18:19:06.345921 systemd-networkd[1199]: lo: Gained carrier
Mar 17 18:19:06.349454 systemd-networkd[1199]: Enumeration completed
Mar 17 18:19:06.349945 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:19:06.351204 systemd[1]: Started systemd-networkd.service.
Mar 17 18:19:06.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:06.357121 systemd[1]: Reached target network.target.
Mar 17 18:19:06.362654 systemd-networkd[1199]: eth0: Link UP
Mar 17 18:19:06.362670 systemd-networkd[1199]: eth0: Gained carrier
Mar 17 18:19:06.362719 systemd[1]: Starting iscsiuio.service...
Mar 17 18:19:06.377651 systemd[1]: Started iscsiuio.service.
Mar 17 18:19:06.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:06.382069 systemd[1]: Starting iscsid.service...
Mar 17 18:19:06.389004 iscsid[1204]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:19:06.389004 iscsid[1204]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Mar 17 18:19:06.389004 iscsid[1204]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Mar 17 18:19:06.389004 iscsid[1204]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 18:19:06.389004 iscsid[1204]: If using hardware iscsi like qla4xxx this message can be ignored.
Mar 17 18:19:06.389004 iscsid[1204]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:19:06.389004 iscsid[1204]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Mar 17 18:19:06.395640 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.23.13/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 18:19:06.402228 systemd[1]: Started iscsid.service.
Mar 17 18:19:06.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:06.417334 systemd[1]: Starting dracut-initqueue.service...
Mar 17 18:19:06.438319 systemd[1]: Finished dracut-initqueue.service.
Mar 17 18:19:06.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:06.442167 systemd[1]: Reached target remote-fs-pre.target.
Mar 17 18:19:06.443885 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:19:06.445542 systemd[1]: Reached target remote-fs.target.
Mar 17 18:19:06.453001 systemd[1]: Starting dracut-pre-mount.service...
Mar 17 18:19:06.471441 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 18:19:06.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:06.920779 ignition[1148]: Ignition 2.14.0
Mar 17 18:19:06.920805 ignition[1148]: Stage: fetch-offline
Mar 17 18:19:06.921257 ignition[1148]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:19:06.922043 ignition[1148]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:19:06.947293 ignition[1148]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:19:06.949769 ignition[1148]: Ignition finished successfully
Mar 17 18:19:06.951681 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 18:19:06.958358 kernel: kauditd_printk_skb: 18 callbacks suppressed
Mar 17 18:19:06.958394 kernel: audit: type=1130 audit(1742235546.953:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:06.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:06.965306 systemd[1]: Starting ignition-fetch.service...
Mar 17 18:19:06.978650 ignition[1223]: Ignition 2.14.0
Mar 17 18:19:06.978680 ignition[1223]: Stage: fetch
Mar 17 18:19:06.979038 ignition[1223]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:19:06.979096 ignition[1223]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:19:06.992494 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:19:06.994907 ignition[1223]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:19:07.006071 ignition[1223]: INFO : PUT result: OK
Mar 17 18:19:07.009566 ignition[1223]: DEBUG : parsed url from cmdline: ""
Mar 17 18:19:07.011248 ignition[1223]: INFO : no config URL provided
Mar 17 18:19:07.012802 ignition[1223]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:19:07.014943 ignition[1223]: INFO : no config at "/usr/lib/ignition/user.ign"
Mar 17 18:19:07.016824 ignition[1223]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:19:07.019683 ignition[1223]: INFO : PUT result: OK
Mar 17 18:19:07.021240 ignition[1223]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 17 18:19:07.023983 ignition[1223]: INFO : GET result: OK
Mar 17 18:19:07.038580 ignition[1223]: DEBUG : parsing config with SHA512: 58c3a6074b9351771a61140e613269999482e8903f67a851459733c5395ab6d0b773ade4599fd5a3597799b1d3a16c872e6bfdeb88e8fe36cc139fc3bad4215a
Mar 17 18:19:07.040214 unknown[1223]: fetched base config from "system"
Mar 17 18:19:07.041622 ignition[1223]: fetch: fetch complete
Mar 17 18:19:07.040231 unknown[1223]: fetched base config from "system"
Mar 17 18:19:07.041635 ignition[1223]: fetch: fetch passed
Mar 17 18:19:07.040246 unknown[1223]: fetched user config from "aws"
Mar 17 18:19:07.041723 ignition[1223]: Ignition finished successfully
Mar 17 18:19:07.053976 systemd[1]: Finished ignition-fetch.service.
Mar 17 18:19:07.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.058194 systemd[1]: Starting ignition-kargs.service...
Mar 17 18:19:07.066806 kernel: audit: type=1130 audit(1742235547.055:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.079484 ignition[1229]: Ignition 2.14.0
Mar 17 18:19:07.079514 ignition[1229]: Stage: kargs
Mar 17 18:19:07.079820 ignition[1229]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:19:07.079872 ignition[1229]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:19:07.092584 ignition[1229]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:19:07.094683 ignition[1229]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:19:07.097072 ignition[1229]: INFO : PUT result: OK
Mar 17 18:19:07.101963 ignition[1229]: kargs: kargs passed
Mar 17 18:19:07.103364 ignition[1229]: Ignition finished successfully
Mar 17 18:19:07.106253 systemd[1]: Finished ignition-kargs.service.
Mar 17 18:19:07.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.110527 systemd[1]: Starting ignition-disks.service...
Mar 17 18:19:07.118637 kernel: audit: type=1130 audit(1742235547.107:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.125810 ignition[1235]: Ignition 2.14.0
Mar 17 18:19:07.125838 ignition[1235]: Stage: disks
Mar 17 18:19:07.126136 ignition[1235]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:19:07.126195 ignition[1235]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:19:07.141122 ignition[1235]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:19:07.143866 ignition[1235]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:19:07.148606 ignition[1235]: INFO : PUT result: OK
Mar 17 18:19:07.153599 ignition[1235]: disks: disks passed
Mar 17 18:19:07.153697 ignition[1235]: Ignition finished successfully
Mar 17 18:19:07.156841 systemd[1]: Finished ignition-disks.service.
Mar 17 18:19:07.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.160136 systemd[1]: Reached target initrd-root-device.target.
Mar 17 18:19:07.187894 kernel: audit: type=1130 audit(1742235547.158:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.168248 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:19:07.169803 systemd[1]: Reached target local-fs.target.
Mar 17 18:19:07.171265 systemd[1]: Reached target sysinit.target.
Mar 17 18:19:07.172709 systemd[1]: Reached target basic.target.
Mar 17 18:19:07.175527 systemd[1]: Starting systemd-fsck-root.service...
Mar 17 18:19:07.220379 systemd-fsck[1243]: ROOT: clean, 623/553520 files, 56021/553472 blocks
Mar 17 18:19:07.228683 systemd[1]: Finished systemd-fsck-root.service.
Mar 17 18:19:07.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.232100 systemd[1]: Mounting sysroot.mount...
Mar 17 18:19:07.242929 kernel: audit: type=1130 audit(1742235547.229:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.262796 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 18:19:07.264481 systemd[1]: Mounted sysroot.mount.
Mar 17 18:19:07.266946 systemd[1]: Reached target initrd-root-fs.target.
Mar 17 18:19:07.284869 systemd[1]: Mounting sysroot-usr.mount...
Mar 17 18:19:07.291035 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Mar 17 18:19:07.291113 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 18:19:07.291171 systemd[1]: Reached target ignition-diskful.target.
Mar 17 18:19:07.294927 systemd[1]: Mounted sysroot-usr.mount.
Mar 17 18:19:07.317441 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:19:07.320151 systemd[1]: Starting initrd-setup-root.service...
Mar 17 18:19:07.338358 initrd-setup-root[1265]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 18:19:07.352791 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1260)
Mar 17 18:19:07.357968 initrd-setup-root[1273]: cut: /sysroot/etc/group: No such file or directory
Mar 17 18:19:07.360993 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:19:07.361027 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 18:19:07.361050 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Mar 17 18:19:07.369361 initrd-setup-root[1297]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 18:19:07.377651 initrd-setup-root[1305]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 18:19:07.396779 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 18:19:07.407357 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:19:07.569615 systemd[1]: Finished initrd-setup-root.service.
Mar 17 18:19:07.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.573855 systemd[1]: Starting ignition-mount.service...
Mar 17 18:19:07.582924 kernel: audit: type=1130 audit(1742235547.571:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.583561 systemd[1]: Starting sysroot-boot.service...
Mar 17 18:19:07.593727 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Mar 17 18:19:07.595807 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Mar 17 18:19:07.642183 systemd[1]: Finished sysroot-boot.service.
Mar 17 18:19:07.651637 kernel: audit: type=1130 audit(1742235547.640:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.669429 ignition[1328]: INFO : Ignition 2.14.0
Mar 17 18:19:07.669429 ignition[1328]: INFO : Stage: mount
Mar 17 18:19:07.672866 ignition[1328]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:19:07.672866 ignition[1328]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:19:07.691765 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:19:07.694803 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:19:07.697148 ignition[1328]: INFO : PUT result: OK
Mar 17 18:19:07.702868 ignition[1328]: INFO : mount: mount passed
Mar 17 18:19:07.704573 ignition[1328]: INFO : Ignition finished successfully
Mar 17 18:19:07.707491 systemd[1]: Finished ignition-mount.service.
Mar 17 18:19:07.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.711548 systemd[1]: Starting ignition-files.service...
Mar 17 18:19:07.719848 kernel: audit: type=1130 audit(1742235547.708:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:07.728091 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:19:07.753046 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1335)
Mar 17 18:19:07.757908 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:19:07.757942 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 18:19:07.757966 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Mar 17 18:19:07.778778 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 18:19:07.784218 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:19:07.803483 ignition[1354]: INFO : Ignition 2.14.0
Mar 17 18:19:07.806823 ignition[1354]: INFO : Stage: files
Mar 17 18:19:07.806823 ignition[1354]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:19:07.806823 ignition[1354]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Mar 17 18:19:07.822303 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 18:19:07.824657 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 18:19:07.827768 ignition[1354]: INFO : PUT result: OK
Mar 17 18:19:07.836935 ignition[1354]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 18:19:07.842236 ignition[1354]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 18:19:07.842236 ignition[1354]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 18:19:07.887251 ignition[1354]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 18:19:07.889812 ignition[1354]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 18:19:07.893917 unknown[1354]: wrote ssh authorized keys file for user: core
Mar 17 18:19:07.896010 ignition[1354]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 18:19:07.899823 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 18:19:07.903024 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 17 18:19:07.903024 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 18:19:07.909478 ignition[1354]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 18:19:08.050691 ignition[1354]: INFO : GET result: OK
Mar 17 18:19:08.198097 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 18:19:08.201688 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:19:08.204873 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:19:08.208157 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 18:19:08.212637 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 18:19:08.217094 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Mar 17 18:19:08.220387 ignition[1354]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:19:08.224087 systemd-networkd[1199]: eth0: Gained IPv6LL
Mar 17 18:19:08.233970 ignition[1354]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem431986052"
Mar 17 18:19:08.236595 ignition[1354]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem431986052": device or resource busy
Mar 17 18:19:08.236595 ignition[1354]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem431986052", trying btrfs: device or resource busy
Mar 17 18:19:08.236595 ignition[1354]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem431986052"
Mar 17 18:19:08.236595 ignition[1354]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem431986052"
Mar 17 18:19:08.248487 ignition[1354]: INFO : op(3): [started] unmounting "/mnt/oem431986052"
Mar 17 18:19:08.248487 ignition[1354]: INFO : op(3): [finished] unmounting "/mnt/oem431986052"
Mar 17 18:19:08.248487 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Mar 17 18:19:08.255512 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:19:08.255512 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:19:08.255512 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:19:08.255512 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:19:08.255512 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:19:08.255512 ignition[1354]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 18:19:08.732283 ignition[1354]: INFO : GET result: OK
Mar 17 18:19:08.875645 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:19:08.879006 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 18:19:08.879006 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:19:08.879006 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:19:08.879006 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:19:08.879006 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Mar 17 18:19:08.879006 ignition[1354]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:19:08.907251 ignition[1354]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3787630252"
Mar 17 18:19:08.907251 ignition[1354]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3787630252": device or resource busy
Mar 17 18:19:08.907251 ignition[1354]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3787630252", trying btrfs: device or resource busy
Mar 17 18:19:08.907251 ignition[1354]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3787630252"
Mar 17 18:19:08.920160 ignition[1354]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3787630252"
Mar 17 18:19:08.920160 ignition[1354]: INFO : op(6): [started] unmounting "/mnt/oem3787630252"
Mar 17 18:19:08.920160 ignition[1354]: INFO : op(6): [finished] unmounting "/mnt/oem3787630252"
Mar 17 18:19:08.920160 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Mar 17 18:19:08.920160 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 18:19:08.920160 ignition[1354]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 18:19:09.190034 ignition[1354]: INFO : GET result: OK
Mar 17 18:19:09.579161 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 18:19:09.579161 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Mar 17 18:19:09.587412 ignition[1354]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:19:09.598419 ignition[1354]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058856207"
Mar 17 18:19:09.602659 ignition[1354]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058856207": device or resource busy
Mar 17 18:19:09.606803 ignition[1354]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1058856207", trying btrfs: device or resource busy
Mar 17 18:19:09.610173 ignition[1354]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058856207"
Mar 17 18:19:09.614824 ignition[1354]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1058856207"
Mar 17 18:19:09.618339 ignition[1354]: INFO : op(9): [started] unmounting "/mnt/oem1058856207"
Mar 17 18:19:09.618339 ignition[1354]: INFO : op(9): [finished] unmounting "/mnt/oem1058856207"
Mar 17 18:19:09.618339 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Mar 17 18:19:09.618339 ignition[1354]: INFO :
files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Mar 17 18:19:09.618339 ignition[1354]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Mar 17 18:19:09.637600 systemd[1]: mnt-oem1058856207.mount: Deactivated successfully. Mar 17 18:19:09.662631 ignition[1354]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2166759534" Mar 17 18:19:09.668383 ignition[1354]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2166759534": device or resource busy Mar 17 18:19:09.668383 ignition[1354]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2166759534", trying btrfs: device or resource busy Mar 17 18:19:09.668383 ignition[1354]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2166759534" Mar 17 18:19:09.679888 ignition[1354]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2166759534" Mar 17 18:19:09.679888 ignition[1354]: INFO : op(c): [started] unmounting "/mnt/oem2166759534" Mar 17 18:19:09.679888 ignition[1354]: INFO : op(c): [finished] unmounting "/mnt/oem2166759534" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(12): [started] processing unit "amazon-ssm-agent.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(12): op(13): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(12): op(13): [finished] writing unit "amazon-ssm-agent.service" at 
"/sysroot/etc/systemd/system/amazon-ssm-agent.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(12): [finished] processing unit "amazon-ssm-agent.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(14): [started] processing unit "nvidia.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(14): [finished] processing unit "nvidia.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(15): [started] processing unit "containerd.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(15): [finished] processing unit "containerd.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(17): [started] processing unit "prepare-helm.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:19:09.679888 ignition[1354]: INFO : files: op(17): [finished] processing unit "prepare-helm.service" Mar 17 18:19:09.735696 ignition[1354]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service" Mar 17 18:19:09.735696 ignition[1354]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service" Mar 17 18:19:09.735696 ignition[1354]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service" Mar 17 18:19:09.735696 ignition[1354]: INFO : files: op(1a): 
[finished] setting preset to enabled for "nvidia.service" Mar 17 18:19:09.735696 ignition[1354]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:19:09.735696 ignition[1354]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:19:09.735696 ignition[1354]: INFO : files: op(1c): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 18:19:09.735696 ignition[1354]: INFO : files: op(1c): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 17 18:19:09.765786 ignition[1354]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:19:09.765786 ignition[1354]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:19:09.765786 ignition[1354]: INFO : files: files passed Mar 17 18:19:09.765786 ignition[1354]: INFO : Ignition finished successfully Mar 17 18:19:09.789881 kernel: audit: type=1130 audit(1742235549.778:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.770199 systemd[1]: Finished ignition-files.service. Mar 17 18:19:09.799828 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:19:09.801692 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:19:09.809434 systemd[1]: Starting ignition-quench.service... Mar 17 18:19:09.817041 systemd[1]: ignition-quench.service: Deactivated successfully. 
Mar 17 18:19:09.818025 systemd[1]: Finished ignition-quench.service. Mar 17 18:19:09.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.828778 kernel: audit: type=1130 audit(1742235549.819:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.834627 initrd-setup-root-after-ignition[1379]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:19:09.838936 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:19:09.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.839696 systemd[1]: Reached target ignition-complete.target. Mar 17 18:19:09.841715 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:19:09.876894 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:19:09.877270 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:19:09.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:19:09.882814 systemd[1]: Reached target initrd-fs.target. Mar 17 18:19:09.885902 systemd[1]: Reached target initrd.target. Mar 17 18:19:09.888845 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:19:09.891918 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:19:09.916781 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:19:09.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.920008 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:19:09.941345 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:19:09.944408 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:19:09.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:19:09.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.945942 systemd[1]: Stopped target timers.target. Mar 17 18:19:09.978927 iscsid[1204]: iscsid shutting down. Mar 17 18:19:09.946514 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:19:09.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.946794 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:19:10.019257 ignition[1392]: INFO : Ignition 2.14.0 Mar 17 18:19:10.019257 ignition[1392]: INFO : Stage: umount Mar 17 18:19:10.019257 ignition[1392]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 18:19:10.019257 ignition[1392]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Mar 17 18:19:10.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:19:09.947329 systemd[1]: Stopped target initrd.target. Mar 17 18:19:10.040545 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 18:19:10.040545 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 18:19:10.040545 ignition[1392]: INFO : PUT result: OK Mar 17 18:19:09.947714 systemd[1]: Stopped target basic.target. Mar 17 18:19:09.948299 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:19:09.948605 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:19:09.948909 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:19:10.073661 ignition[1392]: INFO : umount: umount passed Mar 17 18:19:10.073661 ignition[1392]: INFO : Ignition finished successfully Mar 17 18:19:10.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.949194 systemd[1]: Stopped target remote-fs.target. Mar 17 18:19:10.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.949486 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:19:10.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:19:09.949818 systemd[1]: Stopped target sysinit.target. Mar 17 18:19:09.950095 systemd[1]: Stopped target local-fs.target. Mar 17 18:19:09.950402 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:19:09.950685 systemd[1]: Stopped target swap.target. Mar 17 18:19:10.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.951222 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:19:09.951416 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:19:09.952051 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:19:09.952415 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:19:09.952599 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:19:09.953477 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:19:10.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.953676 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:19:09.954324 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:19:09.954505 systemd[1]: Stopped ignition-files.service. Mar 17 18:19:10.136000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:19:09.956617 systemd[1]: Stopping ignition-mount.service... Mar 17 18:19:10.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:19:10.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.963902 systemd[1]: Stopping iscsid.service... Mar 17 18:19:09.966158 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:19:10.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.966837 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:19:09.967117 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:19:09.974982 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:19:09.975611 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:19:09.994909 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:19:10.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:09.995140 systemd[1]: Stopped iscsid.service. Mar 17 18:19:09.998397 systemd[1]: Stopping iscsiuio.service... Mar 17 18:19:10.018367 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:19:10.020982 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:19:10.021288 systemd[1]: Stopped iscsiuio.service. Mar 17 18:19:10.029532 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:19:10.029718 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:19:10.068431 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:19:10.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:19:10.068627 systemd[1]: Stopped ignition-mount.service. Mar 17 18:19:10.072211 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:19:10.072319 systemd[1]: Stopped ignition-disks.service. Mar 17 18:19:10.078978 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:19:10.079086 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:19:10.081346 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 18:19:10.081430 systemd[1]: Stopped ignition-fetch.service. Mar 17 18:19:10.082969 systemd[1]: Stopped target network.target. Mar 17 18:19:10.084459 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:19:10.084546 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:19:10.086297 systemd[1]: Stopped target paths.target. Mar 17 18:19:10.087656 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:19:10.090932 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:19:10.096881 systemd[1]: Stopped target slices.target. Mar 17 18:19:10.099242 systemd[1]: Stopped target sockets.target. Mar 17 18:19:10.100817 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:19:10.100901 systemd[1]: Closed iscsid.socket. Mar 17 18:19:10.102301 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:19:10.102378 systemd[1]: Closed iscsiuio.socket. Mar 17 18:19:10.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.103698 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:19:10.103826 systemd[1]: Stopped ignition-setup.service. Mar 17 18:19:10.105850 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:19:10.107900 systemd[1]: Stopping systemd-resolved.service... 
Mar 17 18:19:10.111988 systemd-networkd[1199]: eth0: DHCPv6 lease lost Mar 17 18:19:10.247000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:19:10.119245 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:19:10.119506 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:19:10.124510 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:19:10.124838 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:19:10.127697 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:19:10.127798 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:19:10.135152 systemd[1]: Stopping network-cleanup.service... Mar 17 18:19:10.139794 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:19:10.140165 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:19:10.144945 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:19:10.145787 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:19:10.158624 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:19:10.158805 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:19:10.164547 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:19:10.172983 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:19:10.181403 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:19:10.181650 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:19:10.202853 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:19:10.204724 systemd[1]: Stopped network-cleanup.service. Mar 17 18:19:10.228066 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:19:10.235160 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:19:10.254466 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:19:10.259204 systemd[1]: Closed systemd-udevd-control.socket. 
Mar 17 18:19:10.285677 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:19:10.285822 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:19:10.290965 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:19:10.291080 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:19:10.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.295596 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:19:10.295695 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:19:10.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.300300 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:19:10.300411 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:19:10.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.305436 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:19:10.305530 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:19:10.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.312003 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:19:10.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:19:10.327914 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 18:19:10.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:10.328066 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Mar 17 18:19:10.331692 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:19:10.331833 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:19:10.335345 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:19:10.335428 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:19:10.337715 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:19:10.337957 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:19:10.340076 systemd[1]: Reached target initrd-switch-root.target. 
Mar 17 18:19:10.368000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:19:10.368000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:19:10.368000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:19:10.372000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:19:10.372000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:19:10.344605 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:19:10.368733 systemd[1]: Switching root. Mar 17 18:19:10.397963 systemd-journald[309]: Journal stopped Mar 17 18:19:16.590514 systemd-journald[309]: Received SIGTERM from PID 1 (systemd). Mar 17 18:19:16.590629 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:19:16.590682 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 18:19:16.590725 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:19:16.590775 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:19:16.590828 kernel: SELinux: policy capability open_perms=1 Mar 17 18:19:16.590865 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:19:16.590897 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:19:16.590927 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:19:16.590958 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:19:16.593062 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:19:16.593121 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:19:16.593169 systemd[1]: Successfully loaded SELinux policy in 126.009ms. Mar 17 18:19:16.593222 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.379ms. 
Mar 17 18:19:16.593260 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:19:16.593301 systemd[1]: Detected virtualization amazon.
Mar 17 18:19:16.593332 systemd[1]: Detected architecture arm64.
Mar 17 18:19:16.596113 systemd[1]: Detected first boot.
Mar 17 18:19:16.596150 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:19:16.596189 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:19:16.596222 kernel: kauditd_printk_skb: 48 callbacks suppressed
Mar 17 18:19:16.596259 kernel: audit: type=1400 audit(1742235552.054:87): avc: denied { associate } for pid=1442 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Mar 17 18:19:16.596293 kernel: audit: type=1300 audit(1742235552.054:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=4000147682 a1=40000c8ae0 a2=40000cea00 a3=32 items=0 ppid=1425 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:19:16.596326 kernel: audit: type=1327 audit(1742235552.054:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:19:16.596355 kernel: audit: type=1400 audit(1742235552.058:88): avc: denied { associate } for pid=1442 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Mar 17 18:19:16.600473 kernel: audit: type=1300 audit(1742235552.058:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147759 a2=1ed a3=0 items=2 ppid=1425 pid=1442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:19:16.600539 kernel: audit: type=1307 audit(1742235552.058:88): cwd="/"
Mar 17 18:19:16.602367 kernel: audit: type=1302 audit(1742235552.058:88): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:19:16.602414 kernel: audit: type=1302 audit(1742235552.058:88): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:19:16.602448 kernel: audit: type=1327 audit(1742235552.058:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:19:16.602481 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:19:16.602558 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:19:16.602610 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
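The two locksmithd.service warnings above are cgroup-v1 directives that systemd 252 deprecates in favor of their cgroup-v2 equivalents. A minimal sketch of a drop-in that would silence them, with purely illustrative values (the actual CPUShares=/MemoryLimit= values in locksmithd.service are not shown in this log; CPUWeight=100 corresponds to the old CPUShares=1024 default):

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf
# Hypothetical migration drop-in; values are illustrative, not taken from the log.
[Service]
# Replaces CPUShares= (old default 1024 maps to the CPUWeight= default of 100).
CPUWeight=100
# Replaces MemoryLimit= (the cgroup-v1 hard limit) with its v2 equivalent.
MemoryMax=512M
```

With a drop-in like this in place, the deprecated lines in the shipped unit would still need to be removed (or masked by a full override) for the warnings to disappear entirely, since systemd warns per directive parsed.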
Mar 17 18:19:16.602649 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:19:16.602681 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 18:19:16.602711 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device.
Mar 17 18:19:16.603001 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:19:16.603049 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:19:16.603080 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Mar 17 18:19:16.603116 systemd[1]: Created slice system-getty.slice.
Mar 17 18:19:16.603175 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:19:16.603226 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:19:16.603260 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:19:16.603291 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:19:16.603322 systemd[1]: Created slice user.slice.
Mar 17 18:19:16.603356 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:19:16.603389 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:19:16.603425 systemd[1]: Set up automount boot.automount.
Mar 17 18:19:16.603459 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:19:16.603492 systemd[1]: Reached target integritysetup.target.
Mar 17 18:19:16.603522 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:19:16.603555 systemd[1]: Reached target remote-fs.target.
Mar 17 18:19:16.603586 systemd[1]: Reached target slices.target.
Mar 17 18:19:16.603617 systemd[1]: Reached target swap.target.
Mar 17 18:19:16.603650 systemd[1]: Reached target torcx.target.
Mar 17 18:19:16.603680 systemd[1]: Reached target veritysetup.target.
Mar 17 18:19:16.603716 systemd[1]: Listening on systemd-coredump.socket.
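The docker.socket warning at the start of this block is systemd rewriting a legacy /var/run/ path at load time; on modern systems /var/run is a symlink to /run, and systemd asks that the unit file reference /run directly. A sketch of the relevant line after the suggested fix (only the Socket section shown; the rest of the unit is assumed unchanged):

```ini
# /run/systemd/system/docker.socket — relevant fragment only (sketch)
[Socket]
# Previously: ListenStream=/var/run/docker.sock  (legacy path, triggered the warning)
ListenStream=/run/docker.sock
```

The rewrite is applied automatically at runtime either way; editing the unit merely silences the warning on future boots.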
Mar 17 18:19:16.603770 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:19:16.603805 kernel: audit: type=1400 audit(1742235556.266:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:19:16.603839 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:19:16.603870 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:19:16.603901 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:19:16.603930 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:19:16.603963 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:19:16.603992 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:19:16.604024 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:19:16.604059 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 18:19:16.604089 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 18:19:16.604122 systemd[1]: Mounting media.mount...
Mar 17 18:19:16.604155 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 18:19:16.604185 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 18:19:16.604218 systemd[1]: Mounting tmp.mount...
Mar 17 18:19:16.604249 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 18:19:16.604282 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:19:16.604312 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:19:16.604348 systemd[1]: Starting modprobe@configfs.service...
Mar 17 18:19:16.604378 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:19:16.604408 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:19:16.604447 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:19:16.604477 systemd[1]: Starting modprobe@fuse.service...
Mar 17 18:19:16.604507 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:19:16.604540 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:19:16.604570 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 17 18:19:16.604615 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Mar 17 18:19:16.604645 systemd[1]: Starting systemd-journald.service...
Mar 17 18:19:16.604675 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:19:16.604704 systemd[1]: Starting systemd-network-generator.service...
Mar 17 18:19:16.604735 systemd[1]: Starting systemd-remount-fs.service...
Mar 17 18:19:16.604892 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:19:16.604925 kernel: loop: module loaded
Mar 17 18:19:16.604955 systemd[1]: Mounted dev-hugepages.mount.
Mar 17 18:19:16.604988 systemd[1]: Mounted dev-mqueue.mount.
Mar 17 18:19:16.605025 systemd[1]: Mounted media.mount.
Mar 17 18:19:16.605058 systemd[1]: Mounted sys-kernel-debug.mount.
Mar 17 18:19:16.605090 systemd[1]: Mounted sys-kernel-tracing.mount.
Mar 17 18:19:16.605118 kernel: fuse: init (API version 7.34)
Mar 17 18:19:16.605150 systemd[1]: Mounted tmp.mount.
Mar 17 18:19:16.605179 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:19:16.605209 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 18:19:16.605240 systemd[1]: Finished modprobe@configfs.service.
Mar 17 18:19:16.605269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:19:16.605299 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:19:16.605334 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:19:16.605363 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:19:16.605393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:19:16.605425 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:19:16.605454 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 18:19:16.605486 systemd[1]: Finished modprobe@fuse.service.
Mar 17 18:19:16.605516 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:19:16.605550 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:19:16.605580 systemd[1]: Finished systemd-network-generator.service.
Mar 17 18:19:16.605615 systemd-journald[1539]: Journal started
Mar 17 18:19:16.605720 systemd-journald[1539]: Runtime Journal (/run/log/journal/ec209172b158edac6ebbcaa2e9641b26) is 8.0M, max 75.4M, 67.4M free.
Mar 17 18:19:16.266000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:19:16.266000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Mar 17 18:19:16.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.586000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 18:19:16.586000 audit[1539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe2bda9f0 a2=4000 a3=1 items=0 ppid=1 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:19:16.586000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 18:19:16.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.614944 systemd[1]: Finished systemd-remount-fs.service.
Mar 17 18:19:16.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.622081 systemd[1]: Started systemd-journald.service.
Mar 17 18:19:16.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.621616 systemd[1]: Reached target network-pre.target.
Mar 17 18:19:16.626577 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Mar 17 18:19:16.631387 systemd[1]: Mounting sys-kernel-config.mount...
Mar 17 18:19:16.633945 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 18:19:16.653654 systemd[1]: Starting systemd-hwdb-update.service...
Mar 17 18:19:16.660001 systemd[1]: Starting systemd-journal-flush.service...
Mar 17 18:19:16.661705 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:19:16.668265 systemd[1]: Starting systemd-random-seed.service...
Mar 17 18:19:16.669944 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:19:16.673663 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:19:16.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.676128 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Mar 17 18:19:16.677998 systemd[1]: Mounted sys-kernel-config.mount.
Mar 17 18:19:16.682191 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:19:16.693727 systemd-journald[1539]: Time spent on flushing to /var/log/journal/ec209172b158edac6ebbcaa2e9641b26 is 100.526ms for 1081 entries.
Mar 17 18:19:16.693727 systemd-journald[1539]: System Journal (/var/log/journal/ec209172b158edac6ebbcaa2e9641b26) is 8.0M, max 195.6M, 187.6M free.
Mar 17 18:19:16.813991 systemd-journald[1539]: Received client request to flush runtime journal.
Mar 17 18:19:16.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.715137 systemd[1]: Finished systemd-random-seed.service.
Mar 17 18:19:16.717047 systemd[1]: Reached target first-boot-complete.target.
Mar 17 18:19:16.747999 systemd[1]: Finished flatcar-tmpfiles.service.
Mar 17 18:19:16.756586 systemd[1]: Starting systemd-sysusers.service...
Mar 17 18:19:16.768415 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:19:16.816921 systemd[1]: Finished systemd-journal-flush.service.
Mar 17 18:19:16.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.834637 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:19:16.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.838805 systemd[1]: Starting systemd-udev-settle.service...
Mar 17 18:19:16.862732 udevadm[1593]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 18:19:16.991898 systemd[1]: Finished systemd-sysusers.service.
Mar 17 18:19:16.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:16.996083 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:19:17.146039 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:19:17.154986 kernel: kauditd_printk_skb: 27 callbacks suppressed
Mar 17 18:19:17.155124 kernel: audit: type=1130 audit(1742235557.146:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:17.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:17.589070 systemd[1]: Finished systemd-hwdb-update.service.
Mar 17 18:19:17.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:17.597500 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:19:17.602077 kernel: audit: type=1130 audit(1742235557.589:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:17.637404 systemd-udevd[1599]: Using default interface naming scheme 'v252'.
Mar 17 18:19:17.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:17.702617 systemd[1]: Started systemd-udevd.service.
Mar 17 18:19:17.708017 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:19:17.715863 kernel: audit: type=1130 audit(1742235557.703:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:17.723612 systemd[1]: Starting systemd-userdbd.service...
Mar 17 18:19:17.823794 systemd[1]: Found device dev-ttyS0.device.
Mar 17 18:19:17.846632 (udev-worker)[1615]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:19:17.853475 systemd[1]: Started systemd-userdbd.service.
Mar 17 18:19:17.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:17.864175 kernel: audit: type=1130 audit(1742235557.853:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.057973 systemd-networkd[1604]: lo: Link UP
Mar 17 18:19:18.057998 systemd-networkd[1604]: lo: Gained carrier
Mar 17 18:19:18.058965 systemd-networkd[1604]: Enumeration completed
Mar 17 18:19:18.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.059204 systemd[1]: Started systemd-networkd.service.
Mar 17 18:19:18.063248 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:19:18.070127 kernel: audit: type=1130 audit(1742235558.059:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.069498 systemd-networkd[1604]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:19:18.075769 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 17 18:19:18.076093 systemd-networkd[1604]: eth0: Link UP
Mar 17 18:19:18.076455 systemd-networkd[1604]: eth0: Gained carrier
Mar 17 18:19:18.087002 systemd-networkd[1604]: eth0: DHCPv4 address 172.31.23.13/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 18:19:18.213121 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:19:18.215724 systemd[1]: Finished systemd-udev-settle.service.
Mar 17 18:19:18.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.225785 kernel: audit: type=1130 audit(1742235558.216:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.231886 systemd[1]: Starting lvm2-activation-early.service...
Mar 17 18:19:18.296016 lvm[1718]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:19:18.334474 systemd[1]: Finished lvm2-activation-early.service.
Mar 17 18:19:18.336490 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:19:18.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.345797 kernel: audit: type=1130 audit(1742235558.335:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.347667 systemd[1]: Starting lvm2-activation.service...
Mar 17 18:19:18.356803 lvm[1720]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:19:18.395444 systemd[1]: Finished lvm2-activation.service.
Mar 17 18:19:18.397242 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:19:18.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.406320 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 18:19:18.407923 kernel: audit: type=1130 audit(1742235558.395:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.406367 systemd[1]: Reached target local-fs.target.
Mar 17 18:19:18.407928 systemd[1]: Reached target machines.target.
Mar 17 18:19:18.411898 systemd[1]: Starting ldconfig.service...
Mar 17 18:19:18.415280 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:19:18.415443 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:19:18.418004 systemd[1]: Starting systemd-boot-update.service...
Mar 17 18:19:18.421903 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Mar 17 18:19:18.426483 systemd[1]: Starting systemd-machine-id-commit.service...
Mar 17 18:19:18.432306 systemd[1]: Starting systemd-sysext.service...
Mar 17 18:19:18.447172 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1723 (bootctl)
Mar 17 18:19:18.449502 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Mar 17 18:19:18.471265 systemd[1]: Unmounting usr-share-oem.mount...
Mar 17 18:19:18.481460 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Mar 17 18:19:18.482029 systemd[1]: Unmounted usr-share-oem.mount.
Mar 17 18:19:18.516795 kernel: loop0: detected capacity change from 0 to 194096
Mar 17 18:19:18.532157 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Mar 17 18:19:18.551635 kernel: audit: type=1130 audit(1742235558.530:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.604779 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:19:18.632911 kernel: loop1: detected capacity change from 0 to 194096
Mar 17 18:19:18.642843 systemd-fsck[1735]: fsck.fat 4.2 (2021-01-31)
Mar 17 18:19:18.642843 systemd-fsck[1735]: /dev/nvme0n1p1: 236 files, 117179/258078 clusters
Mar 17 18:19:18.648551 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Mar 17 18:19:18.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.662559 systemd[1]: Mounting boot.mount...
Mar 17 18:19:18.668256 kernel: audit: type=1130 audit(1742235558.649:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.676351 (sd-sysext)[1740]: Using extensions 'kubernetes'.
Mar 17 18:19:18.680614 (sd-sysext)[1740]: Merged extensions into '/usr'.
Mar 17 18:19:18.738320 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:19:18.741645 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:19:18.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.744156 systemd[1]: Mounted boot.mount.
Mar 17 18:19:18.754920 systemd[1]: Mounting usr-share-oem.mount...
Mar 17 18:19:18.757258 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:19:18.760339 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:19:18.764970 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:19:18.772375 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:19:18.776084 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:19:18.776417 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:19:18.789271 systemd[1]: Mounted usr-share-oem.mount.
Mar 17 18:19:18.798356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:19:18.798799 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:19:18.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.802286 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:19:18.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.802675 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:19:18.805268 systemd[1]: Finished systemd-sysext.service.
Mar 17 18:19:18.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.808085 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:19:18.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.808543 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:19:18.814547 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:19:18.817139 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:19:18.817256 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:19:18.827474 systemd[1]: Starting systemd-tmpfiles-setup.service...
Mar 17 18:19:18.841731 systemd[1]: Finished systemd-boot-update.service.
Mar 17 18:19:18.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:18.851650 systemd[1]: Reloading.
Mar 17 18:19:18.864186 systemd-tmpfiles[1770]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Mar 17 18:19:18.875288 systemd-tmpfiles[1770]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:19:18.887670 systemd-tmpfiles[1770]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 18:19:18.996524 /usr/lib/systemd/system-generators/torcx-generator[1790]: time="2025-03-17T18:19:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:19:18.996625 /usr/lib/systemd/system-generators/torcx-generator[1790]: time="2025-03-17T18:19:18Z" level=info msg="torcx already run"
Mar 17 18:19:19.243384 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:19:19.243420 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:19:19.291260 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:19:19.438126 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:19:19.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:19:19.446643 systemd[1]: Starting audit-rules.service...
Mar 17 18:19:19.451012 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:19:19.455864 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:19:19.469075 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:19:19.474348 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:19:19.482594 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:19:19.490437 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:19:19.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.501222 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:19:19.501000 audit[1860]: SYSTEM_BOOT pid=1860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.517916 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:19:19.521829 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:19:19.526461 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:19:19.531355 systemd[1]: Starting modprobe@loop.service... Mar 17 18:19:19.535404 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:19:19.536606 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:19:19.536939 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:19:19.546158 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:19:19.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:19:19.550097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:19:19.550542 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:19:19.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.559160 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:19:19.561638 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:19:19.563448 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:19:19.563716 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:19:19.563997 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:19:19.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:19:19.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.566447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:19:19.566934 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:19:19.569996 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:19:19.570375 systemd[1]: Finished modprobe@loop.service. Mar 17 18:19:19.573580 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:19:19.587038 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:19:19.593616 systemd[1]: Starting modprobe@drm.service... Mar 17 18:19:19.599687 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:19:19.608488 systemd[1]: Starting modprobe@loop.service... Mar 17 18:19:19.610270 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:19:19.610571 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:19:19.610960 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:19:19.614980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:19:19.615392 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 18:19:19.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.618430 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:19:19.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.618870 systemd[1]: Finished modprobe@drm.service. Mar 17 18:19:19.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.631778 systemd[1]: Finished ensure-sysext.service. Mar 17 18:19:19.647924 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:19:19.648302 systemd[1]: Finished modprobe@loop.service. Mar 17 18:19:19.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:19:19.650187 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:19:19.651785 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:19:19.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.660561 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:19:19.661010 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:19:19.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:19:19.662948 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Mar 17 18:19:19.775000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:19:19.775000 audit[1893]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc5ffacb0 a2=420 a3=0 items=0 ppid=1854 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:19:19.775000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:19:19.778224 augenrules[1893]: No rules Mar 17 18:19:19.780160 systemd[1]: Finished audit-rules.service. Mar 17 18:19:19.782339 systemd-resolved[1857]: Positive Trust Anchors: Mar 17 18:19:19.782354 systemd-resolved[1857]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:19:19.782405 systemd-resolved[1857]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:19:19.806087 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:19:19.808046 systemd[1]: Reached target time-set.target. Mar 17 18:19:19.840490 systemd-resolved[1857]: Defaulting to hostname 'linux'. Mar 17 18:19:19.843501 systemd[1]: Started systemd-resolved.service. Mar 17 18:19:19.845319 systemd[1]: Reached target network.target. Mar 17 18:19:19.846892 systemd[1]: Reached target nss-lookup.target. Mar 17 18:19:19.891379 systemd-timesyncd[1858]: Contacted time server 38.81.211.177:123 (0.flatcar.pool.ntp.org). 
Mar 17 18:19:19.891505 systemd-timesyncd[1858]: Initial clock synchronization to Mon 2025-03-17 18:19:19.502549 UTC. Mar 17 18:19:19.998891 systemd-networkd[1604]: eth0: Gained IPv6LL Mar 17 18:19:20.001903 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:19:20.003866 systemd[1]: Reached target network-online.target. Mar 17 18:19:20.091722 ldconfig[1722]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:19:20.101730 systemd[1]: Finished ldconfig.service. Mar 17 18:19:20.105714 systemd[1]: Starting systemd-update-done.service... Mar 17 18:19:20.121769 systemd[1]: Finished systemd-update-done.service. Mar 17 18:19:20.123560 systemd[1]: Reached target sysinit.target. Mar 17 18:19:20.125198 systemd[1]: Started motdgen.path. Mar 17 18:19:20.126533 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:19:20.129056 systemd[1]: Started logrotate.timer. Mar 17 18:19:20.130865 systemd[1]: Started mdadm.timer. Mar 17 18:19:20.132123 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:19:20.133650 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:19:20.133719 systemd[1]: Reached target paths.target. Mar 17 18:19:20.135087 systemd[1]: Reached target timers.target. Mar 17 18:19:20.136946 systemd[1]: Listening on dbus.socket. Mar 17 18:19:20.140440 systemd[1]: Starting docker.socket... Mar 17 18:19:20.144847 systemd[1]: Listening on sshd.socket. Mar 17 18:19:20.147140 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:19:20.148066 systemd[1]: Listening on docker.socket. Mar 17 18:19:20.149955 systemd[1]: Reached target sockets.target. Mar 17 18:19:20.151905 systemd[1]: Reached target basic.target. 
Mar 17 18:19:20.153563 systemd[1]: System is tainted: cgroupsv1 Mar 17 18:19:20.153652 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:19:20.153701 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:19:20.156042 systemd[1]: Started amazon-ssm-agent.service. Mar 17 18:19:20.160274 systemd[1]: Starting containerd.service... Mar 17 18:19:20.163630 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Mar 17 18:19:20.168483 systemd[1]: Starting dbus.service... Mar 17 18:19:20.175564 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:19:20.189377 systemd[1]: Starting extend-filesystems.service... Mar 17 18:19:20.191487 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:19:20.263843 jq[1910]: false Mar 17 18:19:20.197609 systemd[1]: Starting kubelet.service... Mar 17 18:19:20.206545 systemd[1]: Starting motdgen.service... Mar 17 18:19:20.224043 systemd[1]: Started nvidia.service. Mar 17 18:19:20.241972 systemd[1]: Starting prepare-helm.service... Mar 17 18:19:20.247690 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:19:20.259271 systemd[1]: Starting sshd-keygen.service... Mar 17 18:19:20.267350 systemd[1]: Starting systemd-logind.service... Mar 17 18:19:20.271931 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:19:20.272143 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:19:20.279034 systemd[1]: Starting update-engine.service... Mar 17 18:19:20.300307 jq[1928]: true Mar 17 18:19:20.282944 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Mar 17 18:19:20.291244 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:19:20.291856 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:19:20.304400 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:19:20.305165 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:19:20.383767 jq[1935]: true Mar 17 18:19:20.412991 tar[1933]: linux-arm64/helm Mar 17 18:19:20.430365 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:19:20.430891 systemd[1]: Finished motdgen.service. Mar 17 18:19:20.443125 extend-filesystems[1912]: Found loop1 Mar 17 18:19:20.444961 extend-filesystems[1912]: Found nvme0n1 Mar 17 18:19:20.444961 extend-filesystems[1912]: Found nvme0n1p1 Mar 17 18:19:20.444961 extend-filesystems[1912]: Found nvme0n1p2 Mar 17 18:19:20.444961 extend-filesystems[1912]: Found nvme0n1p3 Mar 17 18:19:20.444961 extend-filesystems[1912]: Found usr Mar 17 18:19:20.462937 extend-filesystems[1912]: Found nvme0n1p4 Mar 17 18:19:20.462937 extend-filesystems[1912]: Found nvme0n1p6 Mar 17 18:19:20.462937 extend-filesystems[1912]: Found nvme0n1p7 Mar 17 18:19:20.462937 extend-filesystems[1912]: Found nvme0n1p9 Mar 17 18:19:20.462937 extend-filesystems[1912]: Checking size of /dev/nvme0n1p9 Mar 17 18:19:20.543220 dbus-daemon[1909]: [system] SELinux support is enabled Mar 17 18:19:20.543520 systemd[1]: Started dbus.service. Mar 17 18:19:20.548106 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:19:20.548144 systemd[1]: Reached target system-config.target. Mar 17 18:19:20.557009 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:19:20.557054 systemd[1]: Reached target user-config.target. 
Mar 17 18:19:20.581884 dbus-daemon[1909]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1604 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 18:19:20.590843 systemd[1]: Starting systemd-hostnamed.service... Mar 17 18:19:20.599867 extend-filesystems[1912]: Resized partition /dev/nvme0n1p9 Mar 17 18:19:20.602344 extend-filesystems[1980]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 18:19:20.645982 amazon-ssm-agent[1906]: 2025/03/17 18:19:20 Failed to load instance info from vault. RegistrationKey does not exist. Mar 17 18:19:20.648778 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 17 18:19:20.652958 amazon-ssm-agent[1906]: Initializing new seelog logger Mar 17 18:19:20.655145 amazon-ssm-agent[1906]: New Seelog Logger Creation Complete Mar 17 18:19:20.658453 amazon-ssm-agent[1906]: 2025/03/17 18:19:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 18:19:20.658919 amazon-ssm-agent[1906]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 18:19:20.659346 amazon-ssm-agent[1906]: 2025/03/17 18:19:20 processing appconfig overrides Mar 17 18:19:20.707442 update_engine[1926]: I0317 18:19:20.707019 1926 main.cc:92] Flatcar Update Engine starting Mar 17 18:19:20.720046 systemd[1]: Started update-engine.service. Mar 17 18:19:20.725856 systemd[1]: Started locksmithd.service. 
Mar 17 18:19:20.730036 update_engine[1926]: I0317 18:19:20.729104 1926 update_check_scheduler.cc:74] Next update check in 10m44s Mar 17 18:19:20.741363 env[1938]: time="2025-03-17T18:19:20.740699454Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:19:20.745787 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 17 18:19:20.760930 bash[1985]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:19:20.762245 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:19:20.768480 extend-filesystems[1980]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 17 18:19:20.768480 extend-filesystems[1980]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 18:19:20.768480 extend-filesystems[1980]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Mar 17 18:19:20.765673 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:19:20.795523 extend-filesystems[1912]: Resized filesystem in /dev/nvme0n1p9 Mar 17 18:19:20.766208 systemd[1]: Finished extend-filesystems.service. Mar 17 18:19:20.870050 systemd-logind[1925]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 18:19:20.871237 systemd-logind[1925]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 17 18:19:20.871680 systemd-logind[1925]: New seat seat0. Mar 17 18:19:20.884169 systemd[1]: Started systemd-logind.service. Mar 17 18:19:20.908547 systemd[1]: nvidia.service: Deactivated successfully. Mar 17 18:19:20.961214 env[1938]: time="2025-03-17T18:19:20.961136089Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:19:20.961465 env[1938]: time="2025-03-17T18:19:20.961412107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 18:19:20.984169 env[1938]: time="2025-03-17T18:19:20.983994479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:19:20.984364 env[1938]: time="2025-03-17T18:19:20.984326006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:19:20.985058 env[1938]: time="2025-03-17T18:19:20.985013734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:19:20.985304 env[1938]: time="2025-03-17T18:19:20.985272341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:19:20.985430 env[1938]: time="2025-03-17T18:19:20.985400497Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:19:20.985535 env[1938]: time="2025-03-17T18:19:20.985508000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:19:20.985854 env[1938]: time="2025-03-17T18:19:20.985821340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:19:20.986450 env[1938]: time="2025-03-17T18:19:20.986403814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:19:20.988259 env[1938]: time="2025-03-17T18:19:20.988198503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:19:20.988427 env[1938]: time="2025-03-17T18:19:20.988398598Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:19:20.992374 env[1938]: time="2025-03-17T18:19:20.989196762Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:19:20.992599 env[1938]: time="2025-03-17T18:19:20.992560060Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:19:21.001608 env[1938]: time="2025-03-17T18:19:21.001532429Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:19:21.001880 env[1938]: time="2025-03-17T18:19:21.001819121Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:19:21.002036 env[1938]: time="2025-03-17T18:19:21.002005522Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:19:21.002316 env[1938]: time="2025-03-17T18:19:21.002279912Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:19:21.002484 env[1938]: time="2025-03-17T18:19:21.002452916Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:19:21.002610 env[1938]: time="2025-03-17T18:19:21.002580879Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:19:21.002783 env[1938]: time="2025-03-17T18:19:21.002753239Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Mar 17 18:19:21.003478 env[1938]: time="2025-03-17T18:19:21.003393732Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:19:21.003655 env[1938]: time="2025-03-17T18:19:21.003624725Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:19:21.003838 env[1938]: time="2025-03-17T18:19:21.003809379Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:19:21.003954 env[1938]: time="2025-03-17T18:19:21.003926817Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:19:21.004086 env[1938]: time="2025-03-17T18:19:21.004057779Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:19:21.004387 env[1938]: time="2025-03-17T18:19:21.004357528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:19:21.004642 env[1938]: time="2025-03-17T18:19:21.004613879Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:19:21.005396 env[1938]: time="2025-03-17T18:19:21.005355047Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:19:21.005581 env[1938]: time="2025-03-17T18:19:21.005552121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:19:21.005775 env[1938]: time="2025-03-17T18:19:21.005720575Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:19:21.005999 env[1938]: time="2025-03-17T18:19:21.005966332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 17 18:19:21.006137 env[1938]: time="2025-03-17T18:19:21.006094491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:19:21.006314 env[1938]: time="2025-03-17T18:19:21.006280868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:19:21.006435 env[1938]: time="2025-03-17T18:19:21.006407372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:19:21.006588 env[1938]: time="2025-03-17T18:19:21.006557913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:19:21.006754 env[1938]: time="2025-03-17T18:19:21.006710866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:19:21.006916 env[1938]: time="2025-03-17T18:19:21.006852962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:19:21.007074 env[1938]: time="2025-03-17T18:19:21.007045900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:19:21.007224 env[1938]: time="2025-03-17T18:19:21.007183526Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:19:21.007595 env[1938]: time="2025-03-17T18:19:21.007565198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:19:21.009858 env[1938]: time="2025-03-17T18:19:21.009808406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:19:21.010071 env[1938]: time="2025-03-17T18:19:21.010039812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Mar 17 18:19:21.010204 env[1938]: time="2025-03-17T18:19:21.010165478Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:19:21.010508 env[1938]: time="2025-03-17T18:19:21.010473740Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:19:21.010810 env[1938]: time="2025-03-17T18:19:21.010767410Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:19:21.011081 env[1938]: time="2025-03-17T18:19:21.010937725Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:19:21.011382 env[1938]: time="2025-03-17T18:19:21.011353786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:19:21.013007 env[1938]: time="2025-03-17T18:19:21.012850127Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:19:21.014716 env[1938]: time="2025-03-17T18:19:21.013455519Z" level=info msg="Connect containerd service" Mar 17 18:19:21.014716 env[1938]: time="2025-03-17T18:19:21.013541601Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:19:21.024568 env[1938]: time="2025-03-17T18:19:21.024513124Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:19:21.026973 env[1938]: time="2025-03-17T18:19:21.026889477Z" level=info msg="Start subscribing containerd event" Mar 17 18:19:21.027109 env[1938]: time="2025-03-17T18:19:21.026983016Z" level=info msg="Start recovering state" Mar 17 18:19:21.027169 env[1938]: 
time="2025-03-17T18:19:21.027102477Z" level=info msg="Start event monitor" Mar 17 18:19:21.027169 env[1938]: time="2025-03-17T18:19:21.027135579Z" level=info msg="Start snapshots syncer" Mar 17 18:19:21.027169 env[1938]: time="2025-03-17T18:19:21.027159317Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:19:21.027327 env[1938]: time="2025-03-17T18:19:21.027183251Z" level=info msg="Start streaming server" Mar 17 18:19:21.027932 env[1938]: time="2025-03-17T18:19:21.027891477Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:19:21.030323 env[1938]: time="2025-03-17T18:19:21.030280641Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:19:21.030867 systemd[1]: Started containerd.service. Mar 17 18:19:21.034062 env[1938]: time="2025-03-17T18:19:21.034019926Z" level=info msg="containerd successfully booted in 0.328807s" Mar 17 18:19:21.099311 dbus-daemon[1909]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 18:19:21.099728 systemd[1]: Started systemd-hostnamed.service. Mar 17 18:19:21.103490 dbus-daemon[1909]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1979 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 18:19:21.143717 systemd[1]: Starting polkit.service... Mar 17 18:19:21.170133 polkitd[2026]: Started polkitd version 121 Mar 17 18:19:21.204182 polkitd[2026]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 18:19:21.204294 polkitd[2026]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 18:19:21.208771 polkitd[2026]: Finished loading, compiling and executing 2 rules Mar 17 18:19:21.209561 dbus-daemon[1909]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 18:19:21.209842 systemd[1]: Started polkit.service. 
Mar 17 18:19:21.212856 polkitd[2026]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 18:19:21.248410 systemd-hostnamed[1979]: Hostname set to (transient) Mar 17 18:19:21.248582 systemd-resolved[1857]: System hostname changed to 'ip-172-31-23-13'. Mar 17 18:19:21.379522 coreos-metadata[1908]: Mar 17 18:19:21.379 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 18:19:21.385124 coreos-metadata[1908]: Mar 17 18:19:21.385 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Mar 17 18:19:21.386871 coreos-metadata[1908]: Mar 17 18:19:21.386 INFO Fetch successful Mar 17 18:19:21.386871 coreos-metadata[1908]: Mar 17 18:19:21.386 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 18:19:21.388058 coreos-metadata[1908]: Mar 17 18:19:21.387 INFO Fetch successful Mar 17 18:19:21.391320 unknown[1908]: wrote ssh authorized keys file for user: core Mar 17 18:19:21.414594 update-ssh-keys[2075]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:19:21.415789 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Mar 17 18:19:21.620542 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO Create new startup processor Mar 17 18:19:21.623282 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [LongRunningPluginsManager] registered plugins: {} Mar 17 18:19:21.626135 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO Initializing bookkeeping folders Mar 17 18:19:21.626270 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO removing the completed state files Mar 17 18:19:21.626270 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO Initializing bookkeeping folders for long running plugins Mar 17 18:19:21.626270 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Mar 17 18:19:21.626433 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO Initializing healthcheck folders for long running plugins Mar 17 18:19:21.626433 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO Initializing locations for inventory plugin Mar 17 18:19:21.626433 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO Initializing default location for custom inventory Mar 17 18:19:21.626433 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO Initializing default location for file inventory Mar 17 18:19:21.626621 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO Initializing default location for role inventory Mar 17 18:19:21.626621 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO Init the cloudwatchlogs publisher Mar 17 18:19:21.626621 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [instanceID=i-00c457e3e2df09c86] Successfully loaded platform independent plugin aws:refreshAssociation Mar 17 18:19:21.626621 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [instanceID=i-00c457e3e2df09c86] Successfully loaded platform independent plugin aws:configurePackage Mar 17 18:19:21.626853 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [instanceID=i-00c457e3e2df09c86] Successfully loaded platform independent plugin aws:downloadContent Mar 17 18:19:21.626853 
amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [instanceID=i-00c457e3e2df09c86] Successfully loaded platform independent plugin aws:configureDocker Mar 17 18:19:21.626853 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [instanceID=i-00c457e3e2df09c86] Successfully loaded platform independent plugin aws:runDockerAction Mar 17 18:19:21.626853 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [instanceID=i-00c457e3e2df09c86] Successfully loaded platform independent plugin aws:updateSsmAgent Mar 17 18:19:21.626853 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [instanceID=i-00c457e3e2df09c86] Successfully loaded platform independent plugin aws:runDocument Mar 17 18:19:21.627080 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [instanceID=i-00c457e3e2df09c86] Successfully loaded platform independent plugin aws:softwareInventory Mar 17 18:19:21.627080 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [instanceID=i-00c457e3e2df09c86] Successfully loaded platform independent plugin aws:runPowerShellScript Mar 17 18:19:21.627080 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [instanceID=i-00c457e3e2df09c86] Successfully loaded platform dependent plugin aws:runShellScript Mar 17 18:19:21.627080 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Mar 17 18:19:21.627080 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO OS: linux, Arch: arm64 Mar 17 18:19:21.635930 amazon-ssm-agent[1906]: datastore file /var/lib/amazon/ssm/i-00c457e3e2df09c86/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Mar 17 18:19:21.723908 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessagingDeliveryService] Starting document processing engine... 
Mar 17 18:19:21.819387 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessagingDeliveryService] [EngineProcessor] Starting Mar 17 18:19:21.913733 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Mar 17 18:19:22.008191 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessagingDeliveryService] Starting message polling Mar 17 18:19:22.102930 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessagingDeliveryService] Starting send replies to MDS Mar 17 18:19:22.123855 tar[1933]: linux-arm64/LICENSE Mar 17 18:19:22.124452 tar[1933]: linux-arm64/README.md Mar 17 18:19:22.140738 systemd[1]: Finished prepare-helm.service. Mar 17 18:19:22.197967 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [instanceID=i-00c457e3e2df09c86] Starting association polling Mar 17 18:19:22.293093 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Mar 17 18:19:22.371953 locksmithd[1999]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:19:22.388373 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessagingDeliveryService] [Association] Launching response handler Mar 17 18:19:22.483897 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Mar 17 18:19:22.579510 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Mar 17 18:19:22.675365 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Mar 17 18:19:22.771476 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessageGatewayService] Starting session document processing engine... 
Mar 17 18:19:22.867673 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessageGatewayService] [EngineProcessor] Starting Mar 17 18:19:22.888862 systemd[1]: Started kubelet.service. Mar 17 18:19:22.964150 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Mar 17 18:19:23.060859 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-00c457e3e2df09c86, requestId: 4cf1cfdc-26bb-49ee-8285-c4ae669f534c Mar 17 18:19:23.157669 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [OfflineService] Starting document processing engine... Mar 17 18:19:23.254707 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [OfflineService] [EngineProcessor] Starting Mar 17 18:19:23.352034 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [OfflineService] [EngineProcessor] Initial processing Mar 17 18:19:23.449437 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [OfflineService] Starting message polling Mar 17 18:19:23.547142 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [OfflineService] Starting send replies to MDS Mar 17 18:19:23.645080 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessageGatewayService] listening reply. Mar 17 18:19:23.743058 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [LongRunningPluginsManager] starting long running plugin manager Mar 17 18:19:23.781279 sshd_keygen[1957]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:19:23.819187 systemd[1]: Finished sshd-keygen.service. Mar 17 18:19:23.824100 systemd[1]: Starting issuegen.service... Mar 17 18:19:23.847306 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Mar 17 18:19:23.838438 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:19:23.838989 systemd[1]: Finished issuegen.service. 
Mar 17 18:19:23.844426 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:19:23.860349 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:19:23.865616 systemd[1]: Started getty@tty1.service. Mar 17 18:19:23.869992 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 18:19:23.874303 systemd[1]: Reached target getty.target. Mar 17 18:19:23.876698 systemd[1]: Reached target multi-user.target. Mar 17 18:19:23.881270 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:19:23.899786 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:19:23.900313 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:19:23.903195 systemd[1]: Startup finished in 9.991s (kernel) + 12.439s (userspace) = 22.431s. Mar 17 18:19:23.939821 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [HealthCheck] HealthCheck reporting agent health. Mar 17 18:19:23.996988 kubelet[2141]: E0317 18:19:23.996929 2141 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:19:24.000625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:19:24.001079 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 18:19:24.038482 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Mar 17 18:19:24.137219 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [StartupProcessor] Executing startup processor tasks Mar 17 18:19:24.236207 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Mar 17 18:19:24.335394 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Mar 17 18:19:24.434796 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.7 Mar 17 18:19:24.534429 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-00c457e3e2df09c86?role=subscribe&stream=input Mar 17 18:19:24.634157 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-00c457e3e2df09c86?role=subscribe&stream=input Mar 17 18:19:24.734199 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessageGatewayService] Starting receiving message from control channel Mar 17 18:19:24.834506 amazon-ssm-agent[1906]: 2025-03-17 18:19:21 INFO [MessageGatewayService] [EngineProcessor] Initial processing Mar 17 18:19:28.560302 systemd[1]: Created slice system-sshd.slice. Mar 17 18:19:28.562688 systemd[1]: Started sshd@0-172.31.23.13:22-139.178.89.65:37510.service. 
Mar 17 18:19:28.836913 sshd[2167]: Accepted publickey for core from 139.178.89.65 port 37510 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:19:28.843463 sshd[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:19:28.862486 systemd[1]: Created slice user-500.slice. Mar 17 18:19:28.865446 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:19:28.874867 systemd-logind[1925]: New session 1 of user core. Mar 17 18:19:28.889208 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:19:28.891720 systemd[1]: Starting user@500.service... Mar 17 18:19:28.901585 (systemd)[2171]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:19:29.083657 systemd[2171]: Queued start job for default target default.target. Mar 17 18:19:29.084964 systemd[2171]: Reached target paths.target. Mar 17 18:19:29.085020 systemd[2171]: Reached target sockets.target. Mar 17 18:19:29.085052 systemd[2171]: Reached target timers.target. Mar 17 18:19:29.085082 systemd[2171]: Reached target basic.target. Mar 17 18:19:29.085179 systemd[2171]: Reached target default.target. Mar 17 18:19:29.085245 systemd[2171]: Startup finished in 171ms. Mar 17 18:19:29.085299 systemd[1]: Started user@500.service. Mar 17 18:19:29.087317 systemd[1]: Started session-1.scope. Mar 17 18:19:29.231419 systemd[1]: Started sshd@1-172.31.23.13:22-139.178.89.65:37526.service. Mar 17 18:19:29.404403 sshd[2181]: Accepted publickey for core from 139.178.89.65 port 37526 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:19:29.406878 sshd[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:19:29.414262 systemd-logind[1925]: New session 2 of user core. Mar 17 18:19:29.416254 systemd[1]: Started session-2.scope. 
Mar 17 18:19:29.544207 sshd[2181]: pam_unix(sshd:session): session closed for user core Mar 17 18:19:29.549163 systemd[1]: sshd@1-172.31.23.13:22-139.178.89.65:37526.service: Deactivated successfully. Mar 17 18:19:29.550519 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:19:29.553100 systemd-logind[1925]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:19:29.555452 systemd-logind[1925]: Removed session 2. Mar 17 18:19:29.570336 systemd[1]: Started sshd@2-172.31.23.13:22-139.178.89.65:37528.service. Mar 17 18:19:29.745395 sshd[2188]: Accepted publickey for core from 139.178.89.65 port 37528 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:19:29.748409 sshd[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:19:29.757027 systemd[1]: Started session-3.scope. Mar 17 18:19:29.759368 systemd-logind[1925]: New session 3 of user core. Mar 17 18:19:29.882897 sshd[2188]: pam_unix(sshd:session): session closed for user core Mar 17 18:19:29.888366 systemd-logind[1925]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:19:29.890348 systemd[1]: sshd@2-172.31.23.13:22-139.178.89.65:37528.service: Deactivated successfully. Mar 17 18:19:29.891845 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:19:29.893173 systemd-logind[1925]: Removed session 3. Mar 17 18:19:29.908795 systemd[1]: Started sshd@3-172.31.23.13:22-139.178.89.65:37536.service. Mar 17 18:19:30.084077 sshd[2195]: Accepted publickey for core from 139.178.89.65 port 37536 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:19:30.087035 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:19:30.095516 systemd[1]: Started session-4.scope. Mar 17 18:19:30.096173 systemd-logind[1925]: New session 4 of user core. 
Mar 17 18:19:30.228305 sshd[2195]: pam_unix(sshd:session): session closed for user core Mar 17 18:19:30.233498 systemd[1]: sshd@3-172.31.23.13:22-139.178.89.65:37536.service: Deactivated successfully. Mar 17 18:19:30.235352 systemd-logind[1925]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:19:30.235503 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:19:30.237885 systemd-logind[1925]: Removed session 4. Mar 17 18:19:30.253398 systemd[1]: Started sshd@4-172.31.23.13:22-139.178.89.65:37544.service. Mar 17 18:19:30.425581 sshd[2202]: Accepted publickey for core from 139.178.89.65 port 37544 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:19:30.428585 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:19:30.437522 systemd[1]: Started session-5.scope. Mar 17 18:19:30.439720 systemd-logind[1925]: New session 5 of user core. Mar 17 18:19:30.581329 sudo[2206]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:19:30.582382 sudo[2206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:19:30.628934 systemd[1]: Starting docker.service... Mar 17 18:19:30.699136 amazon-ssm-agent[1906]: 2025-03-17 18:19:30 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Mar 17 18:19:30.716634 env[2216]: time="2025-03-17T18:19:30.716548070Z" level=info msg="Starting up" Mar 17 18:19:30.718890 env[2216]: time="2025-03-17T18:19:30.718848308Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:19:30.719092 env[2216]: time="2025-03-17T18:19:30.719063087Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:19:30.719226 env[2216]: time="2025-03-17T18:19:30.719194881Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:19:30.719331 env[2216]: time="2025-03-17T18:19:30.719305540Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:19:30.722403 env[2216]: time="2025-03-17T18:19:30.722361522Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:19:30.722595 env[2216]: time="2025-03-17T18:19:30.722567653Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:19:30.722753 env[2216]: time="2025-03-17T18:19:30.722709078Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:19:30.722871 env[2216]: time="2025-03-17T18:19:30.722844640Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:19:31.144111 env[2216]: time="2025-03-17T18:19:31.143961796Z" level=warning msg="Your kernel does not support cgroup blkio weight" Mar 17 18:19:31.144111 env[2216]: time="2025-03-17T18:19:31.144031946Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Mar 17 18:19:31.145084 env[2216]: time="2025-03-17T18:19:31.144998998Z" level=info msg="Loading containers: start." 
Mar 17 18:19:31.433791 kernel: Initializing XFRM netlink socket Mar 17 18:19:31.512431 env[2216]: time="2025-03-17T18:19:31.512362806Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:19:31.515501 (udev-worker)[2226]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:19:31.627163 systemd-networkd[1604]: docker0: Link UP Mar 17 18:19:31.646233 env[2216]: time="2025-03-17T18:19:31.646188493Z" level=info msg="Loading containers: done." Mar 17 18:19:31.671355 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3318610966-merged.mount: Deactivated successfully. Mar 17 18:19:31.680908 env[2216]: time="2025-03-17T18:19:31.680838342Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:19:31.681320 env[2216]: time="2025-03-17T18:19:31.681270589Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:19:31.681571 env[2216]: time="2025-03-17T18:19:31.681528267Z" level=info msg="Daemon has completed initialization" Mar 17 18:19:31.707669 systemd[1]: Started docker.service. Mar 17 18:19:31.723603 env[2216]: time="2025-03-17T18:19:31.723499295Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:19:33.072249 env[1938]: time="2025-03-17T18:19:33.072144733Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 18:19:33.727228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710330705.mount: Deactivated successfully. Mar 17 18:19:34.190883 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:19:34.191211 systemd[1]: Stopped kubelet.service. Mar 17 18:19:34.194865 systemd[1]: Starting kubelet.service... Mar 17 18:19:34.528299 systemd[1]: Started kubelet.service. 
Mar 17 18:19:34.659502 kubelet[2355]: E0317 18:19:34.659416 2355 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:19:34.666126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:19:34.666561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:19:36.041485 env[1938]: time="2025-03-17T18:19:36.041425384Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:36.043978 env[1938]: time="2025-03-17T18:19:36.043928639Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:36.047785 env[1938]: time="2025-03-17T18:19:36.047658072Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:36.057661 env[1938]: time="2025-03-17T18:19:36.057577456Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 17 18:19:36.060121 env[1938]: time="2025-03-17T18:19:36.060034645Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:36.074666 env[1938]: time="2025-03-17T18:19:36.074572395Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 18:19:38.533251 env[1938]: time="2025-03-17T18:19:38.533180534Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:38.536605 env[1938]: time="2025-03-17T18:19:38.536548096Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:38.539974 env[1938]: time="2025-03-17T18:19:38.539908096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:38.544516 env[1938]: time="2025-03-17T18:19:38.544442742Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:38.546161 env[1938]: time="2025-03-17T18:19:38.546109749Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 17 18:19:38.565757 env[1938]: time="2025-03-17T18:19:38.565676659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 18:19:40.281348 env[1938]: time="2025-03-17T18:19:40.281276391Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:40.285764 env[1938]: time="2025-03-17T18:19:40.285677592Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:40.289261 env[1938]: time="2025-03-17T18:19:40.289201420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:40.292778 env[1938]: time="2025-03-17T18:19:40.292697693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:40.294737 env[1938]: time="2025-03-17T18:19:40.294659770Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 17 18:19:40.311437 env[1938]: time="2025-03-17T18:19:40.311384998Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 18:19:41.662778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789492749.mount: Deactivated successfully. 
Mar 17 18:19:42.466559 env[1938]: time="2025-03-17T18:19:42.466498526Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:42.468560 env[1938]: time="2025-03-17T18:19:42.468489863Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:42.470899 env[1938]: time="2025-03-17T18:19:42.470833941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:42.472902 env[1938]: time="2025-03-17T18:19:42.472856721Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:42.473919 env[1938]: time="2025-03-17T18:19:42.473876620Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 18:19:42.491981 env[1938]: time="2025-03-17T18:19:42.491909604Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:19:43.026799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount241900004.mount: Deactivated successfully. 
Mar 17 18:19:44.356937 env[1938]: time="2025-03-17T18:19:44.356869315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:44.361090 env[1938]: time="2025-03-17T18:19:44.361032928Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:44.366847 env[1938]: time="2025-03-17T18:19:44.366792954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:44.381014 env[1938]: time="2025-03-17T18:19:44.380953107Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:44.382535 env[1938]: time="2025-03-17T18:19:44.382480882Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 18:19:44.401924 env[1938]: time="2025-03-17T18:19:44.401823237Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 18:19:44.690847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:19:44.691190 systemd[1]: Stopped kubelet.service. Mar 17 18:19:44.694008 systemd[1]: Starting kubelet.service... Mar 17 18:19:44.990418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780359941.mount: Deactivated successfully. 
Mar 17 18:19:45.011235 env[1938]: time="2025-03-17T18:19:45.010505719Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:45.020344 env[1938]: time="2025-03-17T18:19:45.020256539Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:45.023055 env[1938]: time="2025-03-17T18:19:45.022976596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:45.025511 env[1938]: time="2025-03-17T18:19:45.025458612Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:45.026732 env[1938]: time="2025-03-17T18:19:45.026661981Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 18:19:45.050148 env[1938]: time="2025-03-17T18:19:45.050095277Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 18:19:45.106065 systemd[1]: Started kubelet.service. 
Mar 17 18:19:45.189955 kubelet[2402]: E0317 18:19:45.189873 2402 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:19:45.195569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:19:45.196003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:19:45.655475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172235605.mount: Deactivated successfully. Mar 17 18:19:49.255452 env[1938]: time="2025-03-17T18:19:49.255379284Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:49.258926 env[1938]: time="2025-03-17T18:19:49.258831107Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:49.263446 env[1938]: time="2025-03-17T18:19:49.263383896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:49.268026 env[1938]: time="2025-03-17T18:19:49.267959797Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:49.270008 env[1938]: time="2025-03-17T18:19:49.269958259Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 
18:19:51.282475 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 17 18:19:55.071071 systemd[1]: Stopped kubelet.service.
Mar 17 18:19:55.076186 systemd[1]: Starting kubelet.service...
Mar 17 18:19:55.121319 systemd[1]: Reloading.
Mar 17 18:19:55.321704 /usr/lib/systemd/system-generators/torcx-generator[2501]: time="2025-03-17T18:19:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:19:55.322236 /usr/lib/systemd/system-generators/torcx-generator[2501]: time="2025-03-17T18:19:55Z" level=info msg="torcx already run"
Mar 17 18:19:55.534963 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:19:55.535209 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:19:55.581575 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:19:55.796809 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 17 18:19:55.797236 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 17 18:19:55.798006 systemd[1]: Stopped kubelet.service.
Mar 17 18:19:55.802267 systemd[1]: Starting kubelet.service...
Mar 17 18:19:56.104129 systemd[1]: Started kubelet.service.
Mar 17 18:19:56.197968 kubelet[2577]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:19:56.198558 kubelet[2577]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:19:56.198667 kubelet[2577]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:19:56.198993 kubelet[2577]: I0317 18:19:56.198930 2577 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:19:58.192152 kubelet[2577]: I0317 18:19:58.192106 2577 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:19:58.192794 kubelet[2577]: I0317 18:19:58.192770 2577 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:19:58.193705 kubelet[2577]: I0317 18:19:58.193667 2577 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:19:58.241542 kubelet[2577]: E0317 18:19:58.241490 2577 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.13:6443: connect: connection refused Mar 17 18:19:58.242335 kubelet[2577]: I0317 18:19:58.242303 2577 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:19:58.260188 kubelet[2577]: I0317 18:19:58.260153 2577 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:19:58.264518 kubelet[2577]: I0317 18:19:58.264452 2577 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:19:58.264985 kubelet[2577]: I0317 18:19:58.264675 2577 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-13","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:19:58.265235 kubelet[2577]: I0317 18:19:58.265212 2577 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
18:19:58.265373 kubelet[2577]: I0317 18:19:58.265354 2577 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:19:58.265703 kubelet[2577]: I0317 18:19:58.265683 2577 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:19:58.268075 kubelet[2577]: I0317 18:19:58.268015 2577 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:19:58.268345 kubelet[2577]: I0317 18:19:58.268290 2577 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:19:58.268435 kubelet[2577]: I0317 18:19:58.268379 2577 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:19:58.268515 kubelet[2577]: I0317 18:19:58.268453 2577 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:19:58.270533 kubelet[2577]: I0317 18:19:58.270393 2577 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:19:58.271136 kubelet[2577]: W0317 18:19:58.271051 2577 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused Mar 17 18:19:58.271260 kubelet[2577]: E0317 18:19:58.271147 2577 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused Mar 17 18:19:58.271511 kubelet[2577]: I0317 18:19:58.271464 2577 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:19:58.271598 kubelet[2577]: W0317 18:19:58.271569 2577 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
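During the "Reloading." pass above, systemd flagged two deprecated resource directives in locksmithd.service and a legacy /var/run path in docker.socket. A hedged sketch of drop-in overrides that would silence those warnings; the unit names come from the log, but the specific values here are assumptions, not Flatcar defaults:

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroup.conf (hypothetical drop-in)
[Service]
CPUShares=
CPUWeight=100
MemoryLimit=
MemoryMax=512M

# /etc/systemd/system/docker.socket.d/10-runpath.conf (hypothetical drop-in)
[Socket]
ListenStream=
ListenStream=/run/docker.sock
```

The empty assignments first clear the deprecated or legacy setting so the replacement value is the only one in effect; `systemctl daemon-reload` would then pick the drop-ins up.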
Mar 17 18:19:58.272680 kubelet[2577]: I0317 18:19:58.272629 2577 server.go:1264] "Started kubelet" Mar 17 18:19:58.285928 kubelet[2577]: E0317 18:19:58.285706 2577 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.13:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-13.182daa0bbe1400c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-13,UID:ip-172-31-23-13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-13,},FirstTimestamp:2025-03-17 18:19:58.272594112 +0000 UTC m=+2.148453447,LastTimestamp:2025-03-17 18:19:58.272594112 +0000 UTC m=+2.148453447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-13,}" Mar 17 18:19:58.286432 kubelet[2577]: W0317 18:19:58.286374 2577 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-13&limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused Mar 17 18:19:58.286609 kubelet[2577]: E0317 18:19:58.286586 2577 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-13&limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused Mar 17 18:19:58.290977 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
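A few entries below, the podresources endpoint registers a rate limit of qps=100 burstTokens=10. That qps/burst shape is a token bucket; a minimal illustrative sketch follows (this is not the client-go flowcontrol implementation, and the class name and injectable clock are assumptions):

```python
import time


class TokenBucket:
    """Token-bucket sketch of the qps=100, burstTokens=10 limit
    logged for the podresources endpoint. Tokens refill at `qps`
    per second up to `burst`; each allowed call spends one token."""

    def __init__(self, qps, burst, clock=time.monotonic):
        self.rate = float(qps)
        self.capacity = float(burst)
        self.tokens = float(burst)   # start full: allows an initial burst
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With qps=100 and burst=10, ten calls succeed immediately and further calls are admitted at roughly one per 10 ms.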
Mar 17 18:19:58.291286 kubelet[2577]: I0317 18:19:58.291236 2577 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:19:58.297474 kubelet[2577]: I0317 18:19:58.297429 2577 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:19:58.298416 kubelet[2577]: I0317 18:19:58.298382 2577 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:19:58.300354 kubelet[2577]: I0317 18:19:58.300327 2577 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:19:58.300586 kubelet[2577]: I0317 18:19:58.300527 2577 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:19:58.302766 kubelet[2577]: I0317 18:19:58.302652 2577 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:19:58.304000 kubelet[2577]: I0317 18:19:58.303966 2577 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:19:58.304202 kubelet[2577]: E0317 18:19:58.304142 2577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-13?timeout=10s\": dial tcp 172.31.23.13:6443: connect: connection refused" interval="200ms" Mar 17 18:19:58.304454 kubelet[2577]: W0317 18:19:58.304020 2577 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused Mar 17 18:19:58.304605 kubelet[2577]: I0317 18:19:58.304566 2577 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:19:58.304773 kubelet[2577]: I0317 18:19:58.304715 2577 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such 
file or directory Mar 17 18:19:58.305301 kubelet[2577]: E0317 18:19:58.304733 2577 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused Mar 17 18:19:58.306382 kubelet[2577]: I0317 18:19:58.303137 2577 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:19:58.308343 kubelet[2577]: E0317 18:19:58.308284 2577 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:19:58.312816 kubelet[2577]: I0317 18:19:58.312714 2577 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:19:58.363589 kubelet[2577]: I0317 18:19:58.363533 2577 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:19:58.368258 kubelet[2577]: I0317 18:19:58.368215 2577 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:19:58.368494 kubelet[2577]: I0317 18:19:58.368471 2577 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:19:58.368695 kubelet[2577]: I0317 18:19:58.368662 2577 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:19:58.368924 kubelet[2577]: E0317 18:19:58.368868 2577 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:19:58.370184 kubelet[2577]: W0317 18:19:58.370098 2577 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused Mar 17 18:19:58.370330 kubelet[2577]: E0317 18:19:58.370197 2577 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused Mar 17 18:19:58.374990 kubelet[2577]: I0317 18:19:58.374942 2577 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:19:58.374990 kubelet[2577]: I0317 18:19:58.374975 2577 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:19:58.375213 kubelet[2577]: I0317 18:19:58.375009 2577 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:19:58.379438 kubelet[2577]: I0317 18:19:58.379379 2577 policy_none.go:49] "None policy: Start" Mar 17 18:19:58.380700 kubelet[2577]: I0317 18:19:58.380663 2577 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:19:58.380700 kubelet[2577]: I0317 18:19:58.380714 2577 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:19:58.393600 kubelet[2577]: I0317 18:19:58.393522 2577 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:19:58.393912 kubelet[2577]: I0317 18:19:58.393845 2577 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:19:58.394064 kubelet[2577]: I0317 18:19:58.394032 2577 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:19:58.399380 kubelet[2577]: E0317 18:19:58.399321 2577 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-13\" not found" Mar 17 18:19:58.400481 kubelet[2577]: I0317 18:19:58.400430 2577 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-13" Mar 17 18:19:58.401327 kubelet[2577]: E0317 18:19:58.401266 2577 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.13:6443/api/v1/nodes\": dial tcp 172.31.23.13:6443: connect: connection refused" node="ip-172-31-23-13" Mar 17 18:19:58.471575 kubelet[2577]: I0317 18:19:58.469581 2577 topology_manager.go:215] "Topology Admit Handler" podUID="4aedf9537a6771a16ad8112a9dc8dd44" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-13" Mar 17 18:19:58.473830 kubelet[2577]: I0317 18:19:58.472909 2577 topology_manager.go:215] "Topology Admit Handler" podUID="d5abe9d8fdc09271a62cdd89054e48ce" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-13" Mar 17 18:19:58.475400 kubelet[2577]: I0317 18:19:58.475350 2577 topology_manager.go:215] "Topology Admit Handler" podUID="c43fd3e226b456cd3478a4a4bfdd1ea7" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-13" Mar 17 18:19:58.501709 kubelet[2577]: I0317 18:19:58.501655 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d5abe9d8fdc09271a62cdd89054e48ce-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-13\" (UID: 
\"d5abe9d8fdc09271a62cdd89054e48ce\") " pod="kube-system/kube-controller-manager-ip-172-31-23-13" Mar 17 18:19:58.501907 kubelet[2577]: I0317 18:19:58.501721 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5abe9d8fdc09271a62cdd89054e48ce-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-13\" (UID: \"d5abe9d8fdc09271a62cdd89054e48ce\") " pod="kube-system/kube-controller-manager-ip-172-31-23-13" Mar 17 18:19:58.501907 kubelet[2577]: I0317 18:19:58.501784 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5abe9d8fdc09271a62cdd89054e48ce-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-13\" (UID: \"d5abe9d8fdc09271a62cdd89054e48ce\") " pod="kube-system/kube-controller-manager-ip-172-31-23-13" Mar 17 18:19:58.501907 kubelet[2577]: I0317 18:19:58.501825 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c43fd3e226b456cd3478a4a4bfdd1ea7-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-13\" (UID: \"c43fd3e226b456cd3478a4a4bfdd1ea7\") " pod="kube-system/kube-scheduler-ip-172-31-23-13" Mar 17 18:19:58.501907 kubelet[2577]: I0317 18:19:58.501861 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4aedf9537a6771a16ad8112a9dc8dd44-ca-certs\") pod \"kube-apiserver-ip-172-31-23-13\" (UID: \"4aedf9537a6771a16ad8112a9dc8dd44\") " pod="kube-system/kube-apiserver-ip-172-31-23-13" Mar 17 18:19:58.501907 kubelet[2577]: I0317 18:19:58.501899 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5abe9d8fdc09271a62cdd89054e48ce-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-23-13\" (UID: \"d5abe9d8fdc09271a62cdd89054e48ce\") " pod="kube-system/kube-controller-manager-ip-172-31-23-13" Mar 17 18:19:58.502236 kubelet[2577]: I0317 18:19:58.501936 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4aedf9537a6771a16ad8112a9dc8dd44-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-13\" (UID: \"4aedf9537a6771a16ad8112a9dc8dd44\") " pod="kube-system/kube-apiserver-ip-172-31-23-13" Mar 17 18:19:58.502236 kubelet[2577]: I0317 18:19:58.501975 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4aedf9537a6771a16ad8112a9dc8dd44-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-13\" (UID: \"4aedf9537a6771a16ad8112a9dc8dd44\") " pod="kube-system/kube-apiserver-ip-172-31-23-13" Mar 17 18:19:58.502236 kubelet[2577]: I0317 18:19:58.502015 2577 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5abe9d8fdc09271a62cdd89054e48ce-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-13\" (UID: \"d5abe9d8fdc09271a62cdd89054e48ce\") " pod="kube-system/kube-controller-manager-ip-172-31-23-13" Mar 17 18:19:58.504723 kubelet[2577]: E0317 18:19:58.504658 2577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-13?timeout=10s\": dial tcp 172.31.23.13:6443: connect: connection refused" interval="400ms" Mar 17 18:19:58.604507 kubelet[2577]: I0317 18:19:58.604475 2577 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-13" Mar 17 18:19:58.605480 kubelet[2577]: E0317 18:19:58.605438 2577 kubelet_node_status.go:96] "Unable to register 
node with API server" err="Post \"https://172.31.23.13:6443/api/v1/nodes\": dial tcp 172.31.23.13:6443: connect: connection refused" node="ip-172-31-23-13" Mar 17 18:19:58.787810 env[1938]: time="2025-03-17T18:19:58.787419316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-13,Uid:4aedf9537a6771a16ad8112a9dc8dd44,Namespace:kube-system,Attempt:0,}" Mar 17 18:19:58.787810 env[1938]: time="2025-03-17T18:19:58.787419436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-13,Uid:c43fd3e226b456cd3478a4a4bfdd1ea7,Namespace:kube-system,Attempt:0,}" Mar 17 18:19:58.793188 env[1938]: time="2025-03-17T18:19:58.792785404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-13,Uid:d5abe9d8fdc09271a62cdd89054e48ce,Namespace:kube-system,Attempt:0,}" Mar 17 18:19:58.906327 kubelet[2577]: E0317 18:19:58.906250 2577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-13?timeout=10s\": dial tcp 172.31.23.13:6443: connect: connection refused" interval="800ms" Mar 17 18:19:59.007865 kubelet[2577]: I0317 18:19:59.007724 2577 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-13" Mar 17 18:19:59.008282 kubelet[2577]: E0317 18:19:59.008214 2577 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.13:6443/api/v1/nodes\": dial tcp 172.31.23.13:6443: connect: connection refused" node="ip-172-31-23-13" Mar 17 18:19:59.251449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291757153.mount: Deactivated successfully. 
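The "Failed to ensure lease exists, will retry" entries double their retry interval on each failure: 200ms, then 400ms, then 800ms here, and 1.6s later. A minimal sketch of that exponential-backoff pattern (the function name and cap value are assumptions, not kubelet code):

```python
def backoff_intervals(base=0.2, factor=2.0, cap=7.0, retries=4):
    """Yield doubling retry intervals in seconds, capped at `cap`,
    matching the intervals seen in the lease-controller retries
    (200ms, 400ms, 800ms, 1.6s, ...)."""
    interval = base
    for _ in range(retries):
        yield min(interval, cap)
        interval *= factor
```

The controller keeps retrying at the capped interval until the apiserver at 172.31.23.13:6443 starts accepting connections, at which point the lease is created and node registration succeeds.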
Mar 17 18:19:59.260953 env[1938]: time="2025-03-17T18:19:59.260884954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.262592 env[1938]: time="2025-03-17T18:19:59.262522034Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.267431 env[1938]: time="2025-03-17T18:19:59.267369767Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.271385 env[1938]: time="2025-03-17T18:19:59.271331308Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.275019 env[1938]: time="2025-03-17T18:19:59.274961791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.277895 env[1938]: time="2025-03-17T18:19:59.277839107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.281690 env[1938]: time="2025-03-17T18:19:59.281620955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.286706 env[1938]: time="2025-03-17T18:19:59.286644576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.290437 kubelet[2577]: W0317 18:19:59.290336 2577 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused Mar 17 18:19:59.290437 kubelet[2577]: E0317 18:19:59.290403 2577 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused Mar 17 18:19:59.296379 env[1938]: time="2025-03-17T18:19:59.296313518Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.298602 env[1938]: time="2025-03-17T18:19:59.298543277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.302154 env[1938]: time="2025-03-17T18:19:59.302088542Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.303867 env[1938]: time="2025-03-17T18:19:59.303805655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:19:59.364789 env[1938]: time="2025-03-17T18:19:59.364434158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:19:59.364789 env[1938]: time="2025-03-17T18:19:59.364519509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:19:59.364789 env[1938]: time="2025-03-17T18:19:59.364546539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:19:59.365404 env[1938]: time="2025-03-17T18:19:59.365312825Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ae254889c397f5ba58ee890b88bb01c6de5e147e9df327942930d0a2077f8c8 pid=2626 runtime=io.containerd.runc.v2 Mar 17 18:19:59.373346 env[1938]: time="2025-03-17T18:19:59.370615928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:19:59.373346 env[1938]: time="2025-03-17T18:19:59.370691425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:19:59.373346 env[1938]: time="2025-03-17T18:19:59.370717266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:19:59.373346 env[1938]: time="2025-03-17T18:19:59.371393449Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b0776c4ab25f786a8789406eca42dcf046198f0dc5c91c4c867c4bbcd13ed24 pid=2640 runtime=io.containerd.runc.v2 Mar 17 18:19:59.391499 env[1938]: time="2025-03-17T18:19:59.391076852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:19:59.391499 env[1938]: time="2025-03-17T18:19:59.391171782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:19:59.391499 env[1938]: time="2025-03-17T18:19:59.391198656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:19:59.392339 env[1938]: time="2025-03-17T18:19:59.392180426Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2c73bd74ebd30ebc4395dfc02243d0013e2ae1a1c86747298eb5a5a10303dd97 pid=2637 runtime=io.containerd.runc.v2
Mar 17 18:19:59.540464 kubelet[2577]: W0317 18:19:59.540212 2577 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused
Mar 17 18:19:59.540464 kubelet[2577]: E0317 18:19:59.540392 2577 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused
Mar 17 18:19:59.578932 env[1938]: time="2025-03-17T18:19:59.578865764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-13,Uid:d5abe9d8fdc09271a62cdd89054e48ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae254889c397f5ba58ee890b88bb01c6de5e147e9df327942930d0a2077f8c8\""
Mar 17 18:19:59.591679 env[1938]: time="2025-03-17T18:19:59.591593525Z" level=info msg="CreateContainer within sandbox \"5ae254889c397f5ba58ee890b88bb01c6de5e147e9df327942930d0a2077f8c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 18:19:59.595486 kubelet[2577]: W0317 18:19:59.595361 2577 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-13&limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused
Mar 17 18:19:59.595699 kubelet[2577]: E0317 18:19:59.595517 2577 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-13&limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused
Mar 17 18:19:59.596183 env[1938]: time="2025-03-17T18:19:59.596112410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-13,Uid:4aedf9537a6771a16ad8112a9dc8dd44,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b0776c4ab25f786a8789406eca42dcf046198f0dc5c91c4c867c4bbcd13ed24\""
Mar 17 18:19:59.609542 env[1938]: time="2025-03-17T18:19:59.609213406Z" level=info msg="CreateContainer within sandbox \"1b0776c4ab25f786a8789406eca42dcf046198f0dc5c91c4c867c4bbcd13ed24\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 18:19:59.624982 env[1938]: time="2025-03-17T18:19:59.624905743Z" level=info msg="CreateContainer within sandbox \"5ae254889c397f5ba58ee890b88bb01c6de5e147e9df327942930d0a2077f8c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f2d0f7033f15b8f4f13a7d395bc50d113f82622c833cbc339e4ba1b94ad6c848\""
Mar 17 18:19:59.626566 env[1938]: time="2025-03-17T18:19:59.626479665Z" level=info msg="StartContainer for \"f2d0f7033f15b8f4f13a7d395bc50d113f82622c833cbc339e4ba1b94ad6c848\""
Mar 17 18:19:59.637947 env[1938]: time="2025-03-17T18:19:59.637868997Z" level=info msg="CreateContainer within sandbox \"1b0776c4ab25f786a8789406eca42dcf046198f0dc5c91c4c867c4bbcd13ed24\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7a1cae28ff8aad5ab3d17a900fd19504ee0b106c9ee55047d85a098b0ffe4f39\""
Mar 17 18:19:59.640055 env[1938]: time="2025-03-17T18:19:59.639494206Z" level=info msg="StartContainer for \"7a1cae28ff8aad5ab3d17a900fd19504ee0b106c9ee55047d85a098b0ffe4f39\""
Mar 17 18:19:59.644102 env[1938]: time="2025-03-17T18:19:59.644032895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-13,Uid:c43fd3e226b456cd3478a4a4bfdd1ea7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c73bd74ebd30ebc4395dfc02243d0013e2ae1a1c86747298eb5a5a10303dd97\""
Mar 17 18:19:59.650704 env[1938]: time="2025-03-17T18:19:59.650638827Z" level=info msg="CreateContainer within sandbox \"2c73bd74ebd30ebc4395dfc02243d0013e2ae1a1c86747298eb5a5a10303dd97\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 18:19:59.682382 env[1938]: time="2025-03-17T18:19:59.682303031Z" level=info msg="CreateContainer within sandbox \"2c73bd74ebd30ebc4395dfc02243d0013e2ae1a1c86747298eb5a5a10303dd97\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1da3c9c5f3f42507da09372a2cbf6c21b65834288e64979121ab023f77935668\""
Mar 17 18:19:59.683867 env[1938]: time="2025-03-17T18:19:59.683801384Z" level=info msg="StartContainer for \"1da3c9c5f3f42507da09372a2cbf6c21b65834288e64979121ab023f77935668\""
Mar 17 18:19:59.724796 kubelet[2577]: E0317 18:19:59.711060 2577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-13?timeout=10s\": dial tcp 172.31.23.13:6443: connect: connection refused" interval="1.6s"
Mar 17 18:19:59.732802 kubelet[2577]: W0317 18:19:59.732297 2577 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused
Mar 17 18:19:59.732802 kubelet[2577]: E0317 18:19:59.732413 2577 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.13:6443: connect: connection refused
Mar 17 18:19:59.813924 kubelet[2577]: I0317 18:19:59.813778 2577 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-13"
Mar 17 18:19:59.815429 kubelet[2577]: E0317 18:19:59.815351 2577 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.13:6443/api/v1/nodes\": dial tcp 172.31.23.13:6443: connect: connection refused" node="ip-172-31-23-13"
Mar 17 18:19:59.850824 env[1938]: time="2025-03-17T18:19:59.850718879Z" level=info msg="StartContainer for \"f2d0f7033f15b8f4f13a7d395bc50d113f82622c833cbc339e4ba1b94ad6c848\" returns successfully"
Mar 17 18:19:59.912812 env[1938]: time="2025-03-17T18:19:59.911275581Z" level=info msg="StartContainer for \"7a1cae28ff8aad5ab3d17a900fd19504ee0b106c9ee55047d85a098b0ffe4f39\" returns successfully"
Mar 17 18:19:59.929308 env[1938]: time="2025-03-17T18:19:59.929236002Z" level=info msg="StartContainer for \"1da3c9c5f3f42507da09372a2cbf6c21b65834288e64979121ab023f77935668\" returns successfully"
Mar 17 18:20:00.728386 amazon-ssm-agent[1906]: 2025-03-17 18:20:00 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Mar 17 18:20:01.418377 kubelet[2577]: I0317 18:20:01.418319 2577 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-13"
Mar 17 18:20:04.727904 kubelet[2577]: E0317 18:20:04.727824 2577 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-13\" not found" node="ip-172-31-23-13"
Mar 17 18:20:04.794626 kubelet[2577]: I0317 18:20:04.794554 2577 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-13"
Mar 17 18:20:05.272352 kubelet[2577]: I0317 18:20:05.272312 2577 apiserver.go:52] "Watching apiserver"
Mar 17 18:20:05.299012 kubelet[2577]: I0317 18:20:05.298968 2577 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:20:06.483687 update_engine[1926]: I0317 18:20:06.483140 1926 update_attempter.cc:509] Updating boot flags...
Mar 17 18:20:07.113975 systemd[1]: Reloading.
Mar 17 18:20:07.322132 /usr/lib/systemd/system-generators/torcx-generator[3046]: time="2025-03-17T18:20:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:20:07.322196 /usr/lib/systemd/system-generators/torcx-generator[3046]: time="2025-03-17T18:20:07Z" level=info msg="torcx already run"
Mar 17 18:20:07.554591 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:20:07.554632 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:20:07.603597 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:20:07.908732 systemd[1]: Stopping kubelet.service...
Mar 17 18:20:07.934965 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:20:07.935644 systemd[1]: Stopped kubelet.service.
Mar 17 18:20:07.940323 systemd[1]: Starting kubelet.service...
Mar 17 18:20:08.216716 systemd[1]: Started kubelet.service.
Mar 17 18:20:08.354842 sudo[3128]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 17 18:20:08.356050 sudo[3128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Mar 17 18:20:08.380968 kubelet[3116]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:20:08.380968 kubelet[3116]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:20:08.380968 kubelet[3116]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:20:08.381628 kubelet[3116]: I0317 18:20:08.381068 3116 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:20:08.405816 kubelet[3116]: I0317 18:20:08.405760 3116 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:20:08.405816 kubelet[3116]: I0317 18:20:08.405807 3116 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:20:08.406176 kubelet[3116]: I0317 18:20:08.406140 3116 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:20:08.408908 kubelet[3116]: I0317 18:20:08.408861 3116 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 18:20:08.411461 kubelet[3116]: I0317 18:20:08.411408 3116 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:20:08.423497 kubelet[3116]: I0317 18:20:08.423445 3116 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:20:08.424925 kubelet[3116]: I0317 18:20:08.424795 3116 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:20:08.425210 kubelet[3116]: I0317 18:20:08.424920 3116 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-13","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 18:20:08.425388 kubelet[3116]: I0317 18:20:08.425218 3116 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:20:08.425388 kubelet[3116]: I0317 18:20:08.425240 3116 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:20:08.425388 kubelet[3116]: I0317 18:20:08.425301 3116 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:20:08.425585 kubelet[3116]: I0317 18:20:08.425472 3116 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:20:08.425585 kubelet[3116]: I0317 18:20:08.425495 3116 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:20:08.425585 kubelet[3116]: I0317 18:20:08.425548 3116 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:20:08.425585 kubelet[3116]: I0317 18:20:08.425576 3116 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:20:08.427535 kubelet[3116]: I0317 18:20:08.427480 3116 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:20:08.427861 kubelet[3116]: I0317 18:20:08.427821 3116 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:20:08.428614 kubelet[3116]: I0317 18:20:08.428564 3116 server.go:1264] "Started kubelet"
Mar 17 18:20:08.457257 kubelet[3116]: I0317 18:20:08.457202 3116 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:20:08.461637 kubelet[3116]: I0317 18:20:08.461544 3116 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:20:08.462097 kubelet[3116]: I0317 18:20:08.462051 3116 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:20:08.462638 kubelet[3116]: I0317 18:20:08.462608 3116 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:20:08.477048 kubelet[3116]: I0317 18:20:08.476922 3116 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:20:08.500894 kubelet[3116]: I0317 18:20:08.498673 3116 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:20:08.500894 kubelet[3116]: I0317 18:20:08.499623 3116 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:20:08.500894 kubelet[3116]: I0317 18:20:08.500027 3116 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:20:08.510053 kubelet[3116]: I0317 18:20:08.509555 3116 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:20:08.511855 kubelet[3116]: I0317 18:20:08.511797 3116 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:20:08.515822 kubelet[3116]: I0317 18:20:08.515773 3116 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:20:08.535048 kubelet[3116]: I0317 18:20:08.534975 3116 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:20:08.538936 kubelet[3116]: I0317 18:20:08.538866 3116 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:20:08.539084 kubelet[3116]: I0317 18:20:08.538948 3116 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:20:08.539084 kubelet[3116]: I0317 18:20:08.538994 3116 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:20:08.539239 kubelet[3116]: E0317 18:20:08.539069 3116 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:20:08.613826 kubelet[3116]: I0317 18:20:08.613789 3116 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-13"
Mar 17 18:20:08.628023 kubelet[3116]: I0317 18:20:08.627980 3116 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-13"
Mar 17 18:20:08.628406 kubelet[3116]: I0317 18:20:08.628375 3116 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-13"
Mar 17 18:20:08.644823 kubelet[3116]: E0317 18:20:08.642616 3116 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 18:20:08.819365 kubelet[3116]: I0317 18:20:08.819256 3116 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:20:08.819642 kubelet[3116]: I0317 18:20:08.819565 3116 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:20:08.819868 kubelet[3116]: I0317 18:20:08.819846 3116 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:20:08.820221 kubelet[3116]: I0317 18:20:08.820197 3116 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 18:20:08.820376 kubelet[3116]: I0317 18:20:08.820332 3116 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 18:20:08.820490 kubelet[3116]: I0317 18:20:08.820471 3116 policy_none.go:49] "None policy: Start"
Mar 17 18:20:08.822482 kubelet[3116]: I0317 18:20:08.822438 3116 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:20:08.822622 kubelet[3116]: I0317 18:20:08.822491 3116 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:20:08.823101 kubelet[3116]: I0317 18:20:08.823062 3116 state_mem.go:75] "Updated machine memory state"
Mar 17 18:20:08.826797 kubelet[3116]: I0317 18:20:08.826730 3116 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:20:08.827109 kubelet[3116]: I0317 18:20:08.827044 3116 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:20:08.832408 kubelet[3116]: I0317 18:20:08.832095 3116 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:20:08.844071 kubelet[3116]: I0317 18:20:08.844006 3116 topology_manager.go:215] "Topology Admit Handler" podUID="4aedf9537a6771a16ad8112a9dc8dd44" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-13"
Mar 17 18:20:08.844268 kubelet[3116]: I0317 18:20:08.844203 3116 topology_manager.go:215] "Topology Admit Handler" podUID="d5abe9d8fdc09271a62cdd89054e48ce" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-13"
Mar 17 18:20:08.844336 kubelet[3116]: I0317 18:20:08.844301 3116 topology_manager.go:215] "Topology Admit Handler" podUID="c43fd3e226b456cd3478a4a4bfdd1ea7" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-13"
Mar 17 18:20:08.930471 kubelet[3116]: I0317 18:20:08.930369 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4aedf9537a6771a16ad8112a9dc8dd44-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-13\" (UID: \"4aedf9537a6771a16ad8112a9dc8dd44\") " pod="kube-system/kube-apiserver-ip-172-31-23-13"
Mar 17 18:20:08.930633 kubelet[3116]: I0317 18:20:08.930483 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d5abe9d8fdc09271a62cdd89054e48ce-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-13\" (UID: \"d5abe9d8fdc09271a62cdd89054e48ce\") " pod="kube-system/kube-controller-manager-ip-172-31-23-13"
Mar 17 18:20:08.930633 kubelet[3116]: I0317 18:20:08.930530 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5abe9d8fdc09271a62cdd89054e48ce-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-13\" (UID: \"d5abe9d8fdc09271a62cdd89054e48ce\") " pod="kube-system/kube-controller-manager-ip-172-31-23-13"
Mar 17 18:20:08.930633 kubelet[3116]: I0317 18:20:08.930572 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5abe9d8fdc09271a62cdd89054e48ce-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-13\" (UID: \"d5abe9d8fdc09271a62cdd89054e48ce\") " pod="kube-system/kube-controller-manager-ip-172-31-23-13"
Mar 17 18:20:08.930633 kubelet[3116]: I0317 18:20:08.930612 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4aedf9537a6771a16ad8112a9dc8dd44-ca-certs\") pod \"kube-apiserver-ip-172-31-23-13\" (UID: \"4aedf9537a6771a16ad8112a9dc8dd44\") " pod="kube-system/kube-apiserver-ip-172-31-23-13"
Mar 17 18:20:08.930919 kubelet[3116]: I0317 18:20:08.930647 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4aedf9537a6771a16ad8112a9dc8dd44-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-13\" (UID: \"4aedf9537a6771a16ad8112a9dc8dd44\") " pod="kube-system/kube-apiserver-ip-172-31-23-13"
Mar 17 18:20:08.930919 kubelet[3116]: I0317 18:20:08.930686 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5abe9d8fdc09271a62cdd89054e48ce-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-13\" (UID: \"d5abe9d8fdc09271a62cdd89054e48ce\") " pod="kube-system/kube-controller-manager-ip-172-31-23-13"
Mar 17 18:20:08.930919 kubelet[3116]: I0317 18:20:08.930721 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5abe9d8fdc09271a62cdd89054e48ce-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-13\" (UID: \"d5abe9d8fdc09271a62cdd89054e48ce\") " pod="kube-system/kube-controller-manager-ip-172-31-23-13"
Mar 17 18:20:08.931126 kubelet[3116]: I0317 18:20:08.930934 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c43fd3e226b456cd3478a4a4bfdd1ea7-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-13\" (UID: \"c43fd3e226b456cd3478a4a4bfdd1ea7\") " pod="kube-system/kube-scheduler-ip-172-31-23-13"
Mar 17 18:20:09.419652 sudo[3128]: pam_unix(sudo:session): session closed for user root
Mar 17 18:20:09.426563 kubelet[3116]: I0317 18:20:09.426479 3116 apiserver.go:52] "Watching apiserver"
Mar 17 18:20:09.500796 kubelet[3116]: I0317 18:20:09.500713 3116 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:20:09.620155 kubelet[3116]: E0317 18:20:09.620094 3116 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-13\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-13"
Mar 17 18:20:09.653495 kubelet[3116]: I0317 18:20:09.653351 3116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-13" podStartSLOduration=1.6533291430000001 podStartE2EDuration="1.653329143s" podCreationTimestamp="2025-03-17 18:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:20:09.639825151 +0000 UTC m=+1.393704058" watchObservedRunningTime="2025-03-17 18:20:09.653329143 +0000 UTC m=+1.407208038"
Mar 17 18:20:09.668983 kubelet[3116]: I0317 18:20:09.668884 3116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-13" podStartSLOduration=1.6688608299999999 podStartE2EDuration="1.66886083s" podCreationTimestamp="2025-03-17 18:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:20:09.654302497 +0000 UTC m=+1.408181404" watchObservedRunningTime="2025-03-17 18:20:09.66886083 +0000 UTC m=+1.422739725"
Mar 17 18:20:09.690245 kubelet[3116]: I0317 18:20:09.690142 3116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-13" podStartSLOduration=1.690119186 podStartE2EDuration="1.690119186s" podCreationTimestamp="2025-03-17 18:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:20:09.670202968 +0000 UTC m=+1.424081863" watchObservedRunningTime="2025-03-17 18:20:09.690119186 +0000 UTC m=+1.443998105"
Mar 17 18:20:13.257388 sudo[2206]: pam_unix(sudo:session): session closed for user root
Mar 17 18:20:13.281036 sshd[2202]: pam_unix(sshd:session): session closed for user core
Mar 17 18:20:13.286818 systemd-logind[1925]: Session 5 logged out. Waiting for processes to exit.
Mar 17 18:20:13.287172 systemd[1]: sshd@4-172.31.23.13:22-139.178.89.65:37544.service: Deactivated successfully.
Mar 17 18:20:13.288679 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 18:20:13.290266 systemd-logind[1925]: Removed session 5.
Mar 17 18:20:20.349546 kubelet[3116]: I0317 18:20:20.349495 3116 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 18:20:20.351005 env[1938]: time="2025-03-17T18:20:20.350919871Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 18:20:20.351997 kubelet[3116]: I0317 18:20:20.351953 3116 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 18:20:21.279979 kubelet[3116]: I0317 18:20:21.279910 3116 topology_manager.go:215] "Topology Admit Handler" podUID="fc17e002-d206-415e-8c3c-95d60bb037b5" podNamespace="kube-system" podName="kube-proxy-7blns"
Mar 17 18:20:21.316044 kubelet[3116]: I0317 18:20:21.315962 3116 topology_manager.go:215] "Topology Admit Handler" podUID="99ce7116-3aec-48db-bff9-f0fc1efc88e1" podNamespace="kube-system" podName="cilium-kjhbm"
Mar 17 18:20:21.383136 kubelet[3116]: I0317 18:20:21.383067 3116 topology_manager.go:215] "Topology Admit Handler" podUID="6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5" podNamespace="kube-system" podName="cilium-operator-599987898-jlm2r"
Mar 17 18:20:21.408974 kubelet[3116]: I0317 18:20:21.408920 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-run\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.409306 kubelet[3116]: I0317 18:20:21.409260 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-host-proc-sys-kernel\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.409586 kubelet[3116]: I0317 18:20:21.409541 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-hostproc\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.409913 kubelet[3116]: I0317 18:20:21.409862 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc17e002-d206-415e-8c3c-95d60bb037b5-xtables-lock\") pod \"kube-proxy-7blns\" (UID: \"fc17e002-d206-415e-8c3c-95d60bb037b5\") " pod="kube-system/kube-proxy-7blns"
Mar 17 18:20:21.410229 kubelet[3116]: I0317 18:20:21.410180 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc17e002-d206-415e-8c3c-95d60bb037b5-lib-modules\") pod \"kube-proxy-7blns\" (UID: \"fc17e002-d206-415e-8c3c-95d60bb037b5\") " pod="kube-system/kube-proxy-7blns"
Mar 17 18:20:21.410485 kubelet[3116]: I0317 18:20:21.410442 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7zh6\" (UniqueName: \"kubernetes.io/projected/fc17e002-d206-415e-8c3c-95d60bb037b5-kube-api-access-k7zh6\") pod \"kube-proxy-7blns\" (UID: \"fc17e002-d206-415e-8c3c-95d60bb037b5\") " pod="kube-system/kube-proxy-7blns"
Mar 17 18:20:21.410771 kubelet[3116]: I0317 18:20:21.410706 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-bpf-maps\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.411072 kubelet[3116]: I0317 18:20:21.411010 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99ce7116-3aec-48db-bff9-f0fc1efc88e1-clustermesh-secrets\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.411332 kubelet[3116]: I0317 18:20:21.411294 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99ce7116-3aec-48db-bff9-f0fc1efc88e1-hubble-tls\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.411553 kubelet[3116]: I0317 18:20:21.411517 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-etc-cni-netd\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.411824 kubelet[3116]: I0317 18:20:21.411740 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-xtables-lock\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.412073 kubelet[3116]: I0317 18:20:21.412034 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc17e002-d206-415e-8c3c-95d60bb037b5-kube-proxy\") pod \"kube-proxy-7blns\" (UID: \"fc17e002-d206-415e-8c3c-95d60bb037b5\") " pod="kube-system/kube-proxy-7blns"
Mar 17 18:20:21.412314 kubelet[3116]: I0317 18:20:21.412274 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cni-path\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.412529 kubelet[3116]: I0317 18:20:21.412492 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxxl\" (UniqueName: \"kubernetes.io/projected/99ce7116-3aec-48db-bff9-f0fc1efc88e1-kube-api-access-fpxxl\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.412782 kubelet[3116]: I0317 18:20:21.412709 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-cgroup\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.413010 kubelet[3116]: I0317 18:20:21.412971 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-lib-modules\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.413310 kubelet[3116]: I0317 18:20:21.413223 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-config-path\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.413589 kubelet[3116]: I0317 18:20:21.413547 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-host-proc-sys-net\") pod \"cilium-kjhbm\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " pod="kube-system/cilium-kjhbm"
Mar 17 18:20:21.515002 kubelet[3116]: I0317 18:20:21.514943 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5-cilium-config-path\") pod \"cilium-operator-599987898-jlm2r\" (UID: \"6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5\") " pod="kube-system/cilium-operator-599987898-jlm2r"
Mar 17 18:20:21.515342 kubelet[3116]: I0317 18:20:21.515310 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88nzr\" (UniqueName: \"kubernetes.io/projected/6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5-kube-api-access-88nzr\") pod \"cilium-operator-599987898-jlm2r\" (UID: \"6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5\") " pod="kube-system/cilium-operator-599987898-jlm2r"
Mar 17 18:20:21.598452 env[1938]: time="2025-03-17T18:20:21.597530525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7blns,Uid:fc17e002-d206-415e-8c3c-95d60bb037b5,Namespace:kube-system,Attempt:0,}"
Mar 17 18:20:21.627477 env[1938]: time="2025-03-17T18:20:21.627424076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kjhbm,Uid:99ce7116-3aec-48db-bff9-f0fc1efc88e1,Namespace:kube-system,Attempt:0,}"
Mar 17 18:20:21.648840 env[1938]: time="2025-03-17T18:20:21.648692357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:20:21.649020 env[1938]: time="2025-03-17T18:20:21.648887000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:20:21.649020 env[1938]: time="2025-03-17T18:20:21.648969795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:20:21.649362 env[1938]: time="2025-03-17T18:20:21.649267551Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0704c7cd646a9d518ce2365e481de7b96c4682e9beecba82024ef1c919c7335c pid=3200 runtime=io.containerd.runc.v2
Mar 17 18:20:21.687824 env[1938]: time="2025-03-17T18:20:21.686102084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:20:21.687824 env[1938]: time="2025-03-17T18:20:21.686322878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:20:21.687824 env[1938]: time="2025-03-17T18:20:21.686586815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:20:21.687824 env[1938]: time="2025-03-17T18:20:21.687507193Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449 pid=3226 runtime=io.containerd.runc.v2
Mar 17 18:20:21.704796 env[1938]: time="2025-03-17T18:20:21.703320101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jlm2r,Uid:6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5,Namespace:kube-system,Attempt:0,}"
Mar 17 18:20:21.750441 env[1938]: time="2025-03-17T18:20:21.750374208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7blns,Uid:fc17e002-d206-415e-8c3c-95d60bb037b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0704c7cd646a9d518ce2365e481de7b96c4682e9beecba82024ef1c919c7335c\""
Mar 17 18:20:21.772808 env[1938]: time="2025-03-17T18:20:21.767021423Z" level=info msg="CreateContainer within sandbox \"0704c7cd646a9d518ce2365e481de7b96c4682e9beecba82024ef1c919c7335c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 18:20:21.786252 env[1938]: time="2025-03-17T18:20:21.785264430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:20:21.786252 env[1938]: time="2025-03-17T18:20:21.785611402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:20:21.786252 env[1938]: time="2025-03-17T18:20:21.785976279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:20:21.788795 env[1938]: time="2025-03-17T18:20:21.786552745Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d pid=3269 runtime=io.containerd.runc.v2
Mar 17 18:20:21.829778 env[1938]: time="2025-03-17T18:20:21.821251864Z" level=info msg="CreateContainer within sandbox \"0704c7cd646a9d518ce2365e481de7b96c4682e9beecba82024ef1c919c7335c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95fe37a90518d13eca4612f0fd621730189e395530a9225429e8726af89c8330\""
Mar 17 18:20:21.829778 env[1938]: time="2025-03-17T18:20:21.825806276Z" level=info msg="StartContainer for \"95fe37a90518d13eca4612f0fd621730189e395530a9225429e8726af89c8330\""
Mar 17 18:20:21.860311 env[1938]: time="2025-03-17T18:20:21.860091422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kjhbm,Uid:99ce7116-3aec-48db-bff9-f0fc1efc88e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\""
Mar 17 18:20:21.871241 env[1938]: time="2025-03-17T18:20:21.871167407Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 18:20:21.928707 env[1938]:
time="2025-03-17T18:20:21.928652124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jlm2r,Uid:6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\"" Mar 17 18:20:21.974173 env[1938]: time="2025-03-17T18:20:21.974091010Z" level=info msg="StartContainer for \"95fe37a90518d13eca4612f0fd621730189e395530a9225429e8726af89c8330\" returns successfully" Mar 17 18:20:22.657080 kubelet[3116]: I0317 18:20:22.656928 3116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7blns" podStartSLOduration=1.6568772360000001 podStartE2EDuration="1.656877236s" podCreationTimestamp="2025-03-17 18:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:20:22.653731854 +0000 UTC m=+14.407610737" watchObservedRunningTime="2025-03-17 18:20:22.656877236 +0000 UTC m=+14.410756119" Mar 17 18:20:28.406691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2240545402.mount: Deactivated successfully. 
Mar 17 18:20:32.512204 env[1938]: time="2025-03-17T18:20:32.512123441Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:32.515833 env[1938]: time="2025-03-17T18:20:32.515767016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:32.518726 env[1938]: time="2025-03-17T18:20:32.518672466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:32.520059 env[1938]: time="2025-03-17T18:20:32.520012798Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 18:20:32.525455 env[1938]: time="2025-03-17T18:20:32.525391288Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:20:32.527539 env[1938]: time="2025-03-17T18:20:32.527043374Z" level=info msg="CreateContainer within sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:20:32.550309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649251825.mount: Deactivated successfully. 
Mar 17 18:20:32.567073 env[1938]: time="2025-03-17T18:20:32.566990455Z" level=info msg="CreateContainer within sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168\"" Mar 17 18:20:32.568336 env[1938]: time="2025-03-17T18:20:32.568283864Z" level=info msg="StartContainer for \"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168\"" Mar 17 18:20:32.681554 env[1938]: time="2025-03-17T18:20:32.681487085Z" level=info msg="StartContainer for \"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168\" returns successfully" Mar 17 18:20:33.349007 env[1938]: time="2025-03-17T18:20:33.348930350Z" level=info msg="shim disconnected" id=2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168 Mar 17 18:20:33.349007 env[1938]: time="2025-03-17T18:20:33.349002726Z" level=warning msg="cleaning up after shim disconnected" id=2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168 namespace=k8s.io Mar 17 18:20:33.349421 env[1938]: time="2025-03-17T18:20:33.349199261Z" level=info msg="cleaning up dead shim" Mar 17 18:20:33.364833 env[1938]: time="2025-03-17T18:20:33.364729299Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:20:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3521 runtime=io.containerd.runc.v2\n" Mar 17 18:20:33.542011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168-rootfs.mount: Deactivated successfully. 
Mar 17 18:20:33.673316 env[1938]: time="2025-03-17T18:20:33.673082381Z" level=info msg="CreateContainer within sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:20:33.717118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222401353.mount: Deactivated successfully. Mar 17 18:20:33.721707 env[1938]: time="2025-03-17T18:20:33.721613974Z" level=info msg="CreateContainer within sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68\"" Mar 17 18:20:33.724149 env[1938]: time="2025-03-17T18:20:33.722705771Z" level=info msg="StartContainer for \"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68\"" Mar 17 18:20:33.844616 env[1938]: time="2025-03-17T18:20:33.842233821Z" level=info msg="StartContainer for \"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68\" returns successfully" Mar 17 18:20:33.862204 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:20:33.863710 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:20:33.864514 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:20:33.869006 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:20:33.894456 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 18:20:33.915664 env[1938]: time="2025-03-17T18:20:33.915593196Z" level=info msg="shim disconnected" id=78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68 Mar 17 18:20:33.915664 env[1938]: time="2025-03-17T18:20:33.915667216Z" level=warning msg="cleaning up after shim disconnected" id=78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68 namespace=k8s.io Mar 17 18:20:33.916092 env[1938]: time="2025-03-17T18:20:33.915689861Z" level=info msg="cleaning up dead shim" Mar 17 18:20:33.932950 env[1938]: time="2025-03-17T18:20:33.932716778Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:20:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3585 runtime=io.containerd.runc.v2\n" Mar 17 18:20:34.545309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68-rootfs.mount: Deactivated successfully. Mar 17 18:20:34.682146 env[1938]: time="2025-03-17T18:20:34.682085587Z" level=info msg="CreateContainer within sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:20:34.738843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3519522135.mount: Deactivated successfully. Mar 17 18:20:34.768990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534371664.mount: Deactivated successfully. 
Mar 17 18:20:34.779837 env[1938]: time="2025-03-17T18:20:34.779724391Z" level=info msg="CreateContainer within sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c\"" Mar 17 18:20:34.780907 env[1938]: time="2025-03-17T18:20:34.780848576Z" level=info msg="StartContainer for \"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c\"" Mar 17 18:20:34.907462 env[1938]: time="2025-03-17T18:20:34.907303124Z" level=info msg="StartContainer for \"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c\" returns successfully" Mar 17 18:20:35.069679 env[1938]: time="2025-03-17T18:20:35.069611692Z" level=info msg="shim disconnected" id=5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c Mar 17 18:20:35.070087 env[1938]: time="2025-03-17T18:20:35.070051743Z" level=warning msg="cleaning up after shim disconnected" id=5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c namespace=k8s.io Mar 17 18:20:35.070212 env[1938]: time="2025-03-17T18:20:35.070184902Z" level=info msg="cleaning up dead shim" Mar 17 18:20:35.095917 env[1938]: time="2025-03-17T18:20:35.095851718Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:20:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3645 runtime=io.containerd.runc.v2\n" Mar 17 18:20:35.377266 env[1938]: time="2025-03-17T18:20:35.377208768Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:35.380479 env[1938]: time="2025-03-17T18:20:35.380428875Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:20:35.383130 env[1938]: time="2025-03-17T18:20:35.383077775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:20:35.384239 env[1938]: time="2025-03-17T18:20:35.384192730Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 18:20:35.392510 env[1938]: time="2025-03-17T18:20:35.392455636Z" level=info msg="CreateContainer within sandbox \"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:20:35.407641 env[1938]: time="2025-03-17T18:20:35.407577349Z" level=info msg="CreateContainer within sandbox \"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\"" Mar 17 18:20:35.409157 env[1938]: time="2025-03-17T18:20:35.409090497Z" level=info msg="StartContainer for \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\"" Mar 17 18:20:35.509144 env[1938]: time="2025-03-17T18:20:35.509061444Z" level=info msg="StartContainer for \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\" returns successfully" Mar 17 18:20:35.728319 env[1938]: time="2025-03-17T18:20:35.728239793Z" level=info msg="CreateContainer within sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:20:35.792025 env[1938]: time="2025-03-17T18:20:35.791964236Z" level=info msg="CreateContainer within 
sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4\"" Mar 17 18:20:35.793021 env[1938]: time="2025-03-17T18:20:35.792891453Z" level=info msg="StartContainer for \"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4\"" Mar 17 18:20:35.832891 kubelet[3116]: I0317 18:20:35.831402 3116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-jlm2r" podStartSLOduration=1.376448076 podStartE2EDuration="14.831377792s" podCreationTimestamp="2025-03-17 18:20:21 +0000 UTC" firstStartedPulling="2025-03-17 18:20:21.931643592 +0000 UTC m=+13.685522487" lastFinishedPulling="2025-03-17 18:20:35.38657332 +0000 UTC m=+27.140452203" observedRunningTime="2025-03-17 18:20:35.722113024 +0000 UTC m=+27.475991967" watchObservedRunningTime="2025-03-17 18:20:35.831377792 +0000 UTC m=+27.585256675" Mar 17 18:20:36.061182 env[1938]: time="2025-03-17T18:20:36.061020482Z" level=info msg="StartContainer for \"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4\" returns successfully" Mar 17 18:20:36.157175 env[1938]: time="2025-03-17T18:20:36.157099020Z" level=info msg="shim disconnected" id=a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4 Mar 17 18:20:36.157545 env[1938]: time="2025-03-17T18:20:36.157507858Z" level=warning msg="cleaning up after shim disconnected" id=a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4 namespace=k8s.io Mar 17 18:20:36.157685 env[1938]: time="2025-03-17T18:20:36.157657265Z" level=info msg="cleaning up dead shim" Mar 17 18:20:36.229474 env[1938]: time="2025-03-17T18:20:36.229410987Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:20:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3738 runtime=io.containerd.runc.v2\n" Mar 17 18:20:36.546448 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4-rootfs.mount: Deactivated successfully. Mar 17 18:20:36.732660 env[1938]: time="2025-03-17T18:20:36.732369769Z" level=info msg="CreateContainer within sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:20:36.782396 env[1938]: time="2025-03-17T18:20:36.778772658Z" level=info msg="CreateContainer within sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\"" Mar 17 18:20:36.780782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149666781.mount: Deactivated successfully. Mar 17 18:20:36.783168 env[1938]: time="2025-03-17T18:20:36.783114771Z" level=info msg="StartContainer for \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\"" Mar 17 18:20:37.097534 env[1938]: time="2025-03-17T18:20:37.097467955Z" level=info msg="StartContainer for \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\" returns successfully" Mar 17 18:20:37.562794 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Mar 17 18:20:37.580544 kubelet[3116]: I0317 18:20:37.580482 3116 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 18:20:37.621698 kubelet[3116]: I0317 18:20:37.621624 3116 topology_manager.go:215] "Topology Admit Handler" podUID="a08c6339-2596-4844-a546-da24e90012ad" podNamespace="kube-system" podName="coredns-7db6d8ff4d-65gvt" Mar 17 18:20:37.636999 kubelet[3116]: I0317 18:20:37.636929 3116 topology_manager.go:215] "Topology Admit Handler" podUID="675ca912-0e02-475a-9d07-3a2d04e48f83" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8m2rq" Mar 17 18:20:37.760582 kubelet[3116]: I0317 18:20:37.760513 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpsn7\" (UniqueName: \"kubernetes.io/projected/a08c6339-2596-4844-a546-da24e90012ad-kube-api-access-cpsn7\") pod \"coredns-7db6d8ff4d-65gvt\" (UID: \"a08c6339-2596-4844-a546-da24e90012ad\") " pod="kube-system/coredns-7db6d8ff4d-65gvt" Mar 17 18:20:37.760795 kubelet[3116]: I0317 18:20:37.760594 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a08c6339-2596-4844-a546-da24e90012ad-config-volume\") pod \"coredns-7db6d8ff4d-65gvt\" (UID: \"a08c6339-2596-4844-a546-da24e90012ad\") " pod="kube-system/coredns-7db6d8ff4d-65gvt" Mar 17 18:20:37.760795 kubelet[3116]: I0317 18:20:37.760649 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fd6z\" (UniqueName: \"kubernetes.io/projected/675ca912-0e02-475a-9d07-3a2d04e48f83-kube-api-access-7fd6z\") pod \"coredns-7db6d8ff4d-8m2rq\" (UID: \"675ca912-0e02-475a-9d07-3a2d04e48f83\") " pod="kube-system/coredns-7db6d8ff4d-8m2rq" Mar 17 18:20:37.760795 kubelet[3116]: I0317 18:20:37.760688 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/675ca912-0e02-475a-9d07-3a2d04e48f83-config-volume\") pod \"coredns-7db6d8ff4d-8m2rq\" (UID: \"675ca912-0e02-475a-9d07-3a2d04e48f83\") " pod="kube-system/coredns-7db6d8ff4d-8m2rq" Mar 17 18:20:37.933320 env[1938]: time="2025-03-17T18:20:37.932592381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-65gvt,Uid:a08c6339-2596-4844-a546-da24e90012ad,Namespace:kube-system,Attempt:0,}" Mar 17 18:20:37.944320 env[1938]: time="2025-03-17T18:20:37.943952277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8m2rq,Uid:675ca912-0e02-475a-9d07-3a2d04e48f83,Namespace:kube-system,Attempt:0,}" Mar 17 18:20:38.371809 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Mar 17 18:20:40.200571 systemd-networkd[1604]: cilium_host: Link UP Mar 17 18:20:40.201663 systemd-networkd[1604]: cilium_net: Link UP Mar 17 18:20:40.201670 systemd-networkd[1604]: cilium_net: Gained carrier Mar 17 18:20:40.202021 systemd-networkd[1604]: cilium_host: Gained carrier Mar 17 18:20:40.207824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:20:40.210090 systemd-networkd[1604]: cilium_host: Gained IPv6LL Mar 17 18:20:40.211189 (udev-worker)[3859]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:20:40.214901 (udev-worker)[3900]: Network interface NamePolicy= disabled on kernel command line. Mar 17 18:20:40.334945 systemd-networkd[1604]: cilium_net: Gained IPv6LL Mar 17 18:20:40.378298 (udev-worker)[3910]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 18:20:40.387348 systemd-networkd[1604]: cilium_vxlan: Link UP Mar 17 18:20:40.387368 systemd-networkd[1604]: cilium_vxlan: Gained carrier Mar 17 18:20:40.871792 kernel: NET: Registered PF_ALG protocol family Mar 17 18:20:42.111974 systemd-networkd[1604]: cilium_vxlan: Gained IPv6LL Mar 17 18:20:42.274900 systemd-networkd[1604]: lxc_health: Link UP Mar 17 18:20:42.295008 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:20:42.294076 systemd-networkd[1604]: lxc_health: Gained carrier Mar 17 18:20:42.593433 systemd-networkd[1604]: lxc18539beb161b: Link UP Mar 17 18:20:42.606881 kernel: eth0: renamed from tmp63bb7 Mar 17 18:20:42.614975 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc18539beb161b: link becomes ready Mar 17 18:20:42.614983 systemd-networkd[1604]: lxc18539beb161b: Gained carrier Mar 17 18:20:42.629306 systemd-networkd[1604]: lxc641f63852a5a: Link UP Mar 17 18:20:42.642919 kernel: eth0: renamed from tmp6543a Mar 17 18:20:42.650355 (udev-worker)[3909]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 18:20:42.663731 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc641f63852a5a: link becomes ready Mar 17 18:20:42.660064 systemd-networkd[1604]: lxc641f63852a5a: Gained carrier Mar 17 18:20:43.671982 kubelet[3116]: I0317 18:20:43.671900 3116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kjhbm" podStartSLOduration=12.018810853 podStartE2EDuration="22.671877504s" podCreationTimestamp="2025-03-17 18:20:21 +0000 UTC" firstStartedPulling="2025-03-17 18:20:21.869790657 +0000 UTC m=+13.623669540" lastFinishedPulling="2025-03-17 18:20:32.522857248 +0000 UTC m=+24.276736191" observedRunningTime="2025-03-17 18:20:37.785269666 +0000 UTC m=+29.539148597" watchObservedRunningTime="2025-03-17 18:20:43.671877504 +0000 UTC m=+35.425756387" Mar 17 18:20:43.774934 systemd-networkd[1604]: lxc18539beb161b: Gained IPv6LL Mar 17 18:20:44.286930 systemd-networkd[1604]: lxc_health: Gained IPv6LL Mar 17 18:20:44.738036 systemd-networkd[1604]: lxc641f63852a5a: Gained IPv6LL Mar 17 18:20:51.075683 env[1938]: time="2025-03-17T18:20:51.075388263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:20:51.075683 env[1938]: time="2025-03-17T18:20:51.075546693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:20:51.075683 env[1938]: time="2025-03-17T18:20:51.075613296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:20:51.077691 env[1938]: time="2025-03-17T18:20:51.076873732Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6543a9b7f6284ba03965358c50bd379cc2ce8b3118a52b1c91ee6ccaafb08712 pid=4275 runtime=io.containerd.runc.v2 Mar 17 18:20:51.152672 systemd[1]: run-containerd-runc-k8s.io-6543a9b7f6284ba03965358c50bd379cc2ce8b3118a52b1c91ee6ccaafb08712-runc.wixh9X.mount: Deactivated successfully. Mar 17 18:20:51.262978 env[1938]: time="2025-03-17T18:20:51.254352978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:20:51.262978 env[1938]: time="2025-03-17T18:20:51.254423961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:20:51.262978 env[1938]: time="2025-03-17T18:20:51.254450038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:20:51.275384 env[1938]: time="2025-03-17T18:20:51.263503691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63bb79ce46b97600e4ca3b969e7ec9a7d07334c09344ffc8c83a1d21902f65c4 pid=4313 runtime=io.containerd.runc.v2 Mar 17 18:20:51.275384 env[1938]: time="2025-03-17T18:20:51.268649850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8m2rq,Uid:675ca912-0e02-475a-9d07-3a2d04e48f83,Namespace:kube-system,Attempt:0,} returns sandbox id \"6543a9b7f6284ba03965358c50bd379cc2ce8b3118a52b1c91ee6ccaafb08712\"" Mar 17 18:20:51.297126 env[1938]: time="2025-03-17T18:20:51.296670919Z" level=info msg="CreateContainer within sandbox \"6543a9b7f6284ba03965358c50bd379cc2ce8b3118a52b1c91ee6ccaafb08712\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:20:51.353004 env[1938]: time="2025-03-17T18:20:51.352838389Z" level=info msg="CreateContainer within sandbox \"6543a9b7f6284ba03965358c50bd379cc2ce8b3118a52b1c91ee6ccaafb08712\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55dfc30a70b29ed6afcae141b6ae0e1b97adeb49db49ac012cc647c526d1ae71\"" Mar 17 18:20:51.356084 env[1938]: time="2025-03-17T18:20:51.354480080Z" level=info msg="StartContainer for \"55dfc30a70b29ed6afcae141b6ae0e1b97adeb49db49ac012cc647c526d1ae71\"" Mar 17 18:20:51.502416 env[1938]: time="2025-03-17T18:20:51.502358862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-65gvt,Uid:a08c6339-2596-4844-a546-da24e90012ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"63bb79ce46b97600e4ca3b969e7ec9a7d07334c09344ffc8c83a1d21902f65c4\"" Mar 17 18:20:51.511788 env[1938]: time="2025-03-17T18:20:51.511686941Z" level=info msg="CreateContainer within sandbox \"63bb79ce46b97600e4ca3b969e7ec9a7d07334c09344ffc8c83a1d21902f65c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 
18:20:51.557591 env[1938]: time="2025-03-17T18:20:51.557530591Z" level=info msg="StartContainer for \"55dfc30a70b29ed6afcae141b6ae0e1b97adeb49db49ac012cc647c526d1ae71\" returns successfully" Mar 17 18:20:51.558908 env[1938]: time="2025-03-17T18:20:51.558739568Z" level=info msg="CreateContainer within sandbox \"63bb79ce46b97600e4ca3b969e7ec9a7d07334c09344ffc8c83a1d21902f65c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"26ee1d2593e6c0adeaf12ebed374f1f8c530dd959b13877c085ca699cbd21c08\"" Mar 17 18:20:51.563566 env[1938]: time="2025-03-17T18:20:51.563504940Z" level=info msg="StartContainer for \"26ee1d2593e6c0adeaf12ebed374f1f8c530dd959b13877c085ca699cbd21c08\"" Mar 17 18:20:51.707739 env[1938]: time="2025-03-17T18:20:51.707603398Z" level=info msg="StartContainer for \"26ee1d2593e6c0adeaf12ebed374f1f8c530dd959b13877c085ca699cbd21c08\" returns successfully" Mar 17 18:20:51.811886 kubelet[3116]: I0317 18:20:51.811799 3116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8m2rq" podStartSLOduration=30.811719932 podStartE2EDuration="30.811719932s" podCreationTimestamp="2025-03-17 18:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:20:51.810803286 +0000 UTC m=+43.564682217" watchObservedRunningTime="2025-03-17 18:20:51.811719932 +0000 UTC m=+43.565598863" Mar 17 18:20:51.860172 kubelet[3116]: I0317 18:20:51.860074 3116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-65gvt" podStartSLOduration=30.860052532 podStartE2EDuration="30.860052532s" podCreationTimestamp="2025-03-17 18:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:20:51.837198228 +0000 UTC m=+43.591077147" watchObservedRunningTime="2025-03-17 18:20:51.860052532 +0000 
UTC m=+43.613931427" Mar 17 18:20:52.629181 amazon-ssm-agent[1906]: 2025-03-17 18:20:52 INFO [HealthCheck] HealthCheck reporting agent health. Mar 17 18:20:55.266347 systemd[1]: Started sshd@5-172.31.23.13:22-139.178.89.65:44910.service. Mar 17 18:20:55.443853 sshd[4437]: Accepted publickey for core from 139.178.89.65 port 44910 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:20:55.447104 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:20:55.456875 systemd[1]: Started session-6.scope. Mar 17 18:20:55.458602 systemd-logind[1925]: New session 6 of user core. Mar 17 18:20:55.723887 sshd[4437]: pam_unix(sshd:session): session closed for user core Mar 17 18:20:55.729909 systemd[1]: sshd@5-172.31.23.13:22-139.178.89.65:44910.service: Deactivated successfully. Mar 17 18:20:55.732463 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:20:55.732990 systemd-logind[1925]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:20:55.735166 systemd-logind[1925]: Removed session 6. Mar 17 18:21:00.752102 systemd[1]: Started sshd@6-172.31.23.13:22-139.178.89.65:44914.service. Mar 17 18:21:00.932393 sshd[4455]: Accepted publickey for core from 139.178.89.65 port 44914 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:00.935287 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:00.943462 systemd-logind[1925]: New session 7 of user core. Mar 17 18:21:00.944845 systemd[1]: Started session-7.scope. Mar 17 18:21:01.211211 sshd[4455]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:01.217021 systemd[1]: sshd@6-172.31.23.13:22-139.178.89.65:44914.service: Deactivated successfully. Mar 17 18:21:01.219155 systemd-logind[1925]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:21:01.219319 systemd[1]: session-7.scope: Deactivated successfully. 
Mar 17 18:21:01.227436 systemd-logind[1925]: Removed session 7. Mar 17 18:21:06.238017 systemd[1]: Started sshd@7-172.31.23.13:22-139.178.89.65:54972.service. Mar 17 18:21:06.421313 sshd[4469]: Accepted publickey for core from 139.178.89.65 port 54972 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:06.424491 sshd[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:06.433319 systemd[1]: Started session-8.scope. Mar 17 18:21:06.434345 systemd-logind[1925]: New session 8 of user core. Mar 17 18:21:06.686675 sshd[4469]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:06.692282 systemd-logind[1925]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:21:06.694256 systemd[1]: sshd@7-172.31.23.13:22-139.178.89.65:54972.service: Deactivated successfully. Mar 17 18:21:06.696033 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:21:06.698675 systemd-logind[1925]: Removed session 8. Mar 17 18:21:11.713230 systemd[1]: Started sshd@8-172.31.23.13:22-139.178.89.65:41206.service. Mar 17 18:21:11.892154 sshd[4486]: Accepted publickey for core from 139.178.89.65 port 41206 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:11.894809 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:11.904489 systemd[1]: Started session-9.scope. Mar 17 18:21:11.904994 systemd-logind[1925]: New session 9 of user core. Mar 17 18:21:12.160097 sshd[4486]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:12.167086 systemd[1]: sshd@8-172.31.23.13:22-139.178.89.65:41206.service: Deactivated successfully. Mar 17 18:21:12.168601 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 18:21:12.171081 systemd-logind[1925]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:21:12.174055 systemd-logind[1925]: Removed session 9. 
Mar 17 18:21:17.184654 systemd[1]: Started sshd@9-172.31.23.13:22-139.178.89.65:41208.service. Mar 17 18:21:17.361641 sshd[4500]: Accepted publickey for core from 139.178.89.65 port 41208 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:17.364963 sshd[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:17.374164 systemd[1]: Started session-10.scope. Mar 17 18:21:17.374886 systemd-logind[1925]: New session 10 of user core. Mar 17 18:21:17.621410 sshd[4500]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:17.626498 systemd-logind[1925]: Session 10 logged out. Waiting for processes to exit. Mar 17 18:21:17.627102 systemd[1]: sshd@9-172.31.23.13:22-139.178.89.65:41208.service: Deactivated successfully. Mar 17 18:21:17.628682 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 18:21:17.630359 systemd-logind[1925]: Removed session 10. Mar 17 18:21:17.648534 systemd[1]: Started sshd@10-172.31.23.13:22-139.178.89.65:41216.service. Mar 17 18:21:17.830196 sshd[4514]: Accepted publickey for core from 139.178.89.65 port 41216 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:17.833333 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:17.842636 systemd[1]: Started session-11.scope. Mar 17 18:21:17.843658 systemd-logind[1925]: New session 11 of user core. Mar 17 18:21:18.169577 sshd[4514]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:18.174916 systemd-logind[1925]: Session 11 logged out. Waiting for processes to exit. Mar 17 18:21:18.177020 systemd[1]: sshd@10-172.31.23.13:22-139.178.89.65:41216.service: Deactivated successfully. Mar 17 18:21:18.179512 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 18:21:18.183345 systemd-logind[1925]: Removed session 11. Mar 17 18:21:18.204718 systemd[1]: Started sshd@11-172.31.23.13:22-139.178.89.65:41232.service. 
Mar 17 18:21:18.398806 sshd[4525]: Accepted publickey for core from 139.178.89.65 port 41232 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:18.398067 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:18.408664 systemd-logind[1925]: New session 12 of user core. Mar 17 18:21:18.409699 systemd[1]: Started session-12.scope. Mar 17 18:21:18.675073 sshd[4525]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:18.681349 systemd[1]: sshd@11-172.31.23.13:22-139.178.89.65:41232.service: Deactivated successfully. Mar 17 18:21:18.683350 systemd-logind[1925]: Session 12 logged out. Waiting for processes to exit. Mar 17 18:21:18.683529 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 18:21:18.686557 systemd-logind[1925]: Removed session 12. Mar 17 18:21:23.699195 systemd[1]: Started sshd@12-172.31.23.13:22-139.178.89.65:39652.service. Mar 17 18:21:23.875682 sshd[4540]: Accepted publickey for core from 139.178.89.65 port 39652 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:23.878408 sshd[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:23.887690 systemd[1]: Started session-13.scope. Mar 17 18:21:23.888172 systemd-logind[1925]: New session 13 of user core. Mar 17 18:21:24.131593 sshd[4540]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:24.137081 systemd-logind[1925]: Session 13 logged out. Waiting for processes to exit. Mar 17 18:21:24.137440 systemd[1]: sshd@12-172.31.23.13:22-139.178.89.65:39652.service: Deactivated successfully. Mar 17 18:21:24.139039 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:21:24.140257 systemd-logind[1925]: Removed session 13. Mar 17 18:21:29.157848 systemd[1]: Started sshd@13-172.31.23.13:22-139.178.89.65:39658.service. 
Mar 17 18:21:29.344787 sshd[4553]: Accepted publickey for core from 139.178.89.65 port 39658 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:29.347501 sshd[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:29.356851 systemd-logind[1925]: New session 14 of user core. Mar 17 18:21:29.357444 systemd[1]: Started session-14.scope. Mar 17 18:21:29.609917 sshd[4553]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:29.615384 systemd-logind[1925]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:21:29.615843 systemd[1]: sshd@13-172.31.23.13:22-139.178.89.65:39658.service: Deactivated successfully. Mar 17 18:21:29.617552 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 18:21:29.618643 systemd-logind[1925]: Removed session 14. Mar 17 18:21:34.636688 systemd[1]: Started sshd@14-172.31.23.13:22-139.178.89.65:43916.service. Mar 17 18:21:34.812241 sshd[4566]: Accepted publickey for core from 139.178.89.65 port 43916 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:34.815512 sshd[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:34.824310 systemd[1]: Started session-15.scope. Mar 17 18:21:34.824739 systemd-logind[1925]: New session 15 of user core. Mar 17 18:21:35.074688 sshd[4566]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:35.080435 systemd-logind[1925]: Session 15 logged out. Waiting for processes to exit. Mar 17 18:21:35.082024 systemd[1]: sshd@14-172.31.23.13:22-139.178.89.65:43916.service: Deactivated successfully. Mar 17 18:21:35.083650 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:21:35.085570 systemd-logind[1925]: Removed session 15. Mar 17 18:21:35.102699 systemd[1]: Started sshd@15-172.31.23.13:22-139.178.89.65:43924.service. 
Mar 17 18:21:35.281843 sshd[4579]: Accepted publickey for core from 139.178.89.65 port 43924 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:35.284974 sshd[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:35.293867 systemd-logind[1925]: New session 16 of user core. Mar 17 18:21:35.294019 systemd[1]: Started session-16.scope. Mar 17 18:21:35.616328 sshd[4579]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:35.621246 systemd[1]: sshd@15-172.31.23.13:22-139.178.89.65:43924.service: Deactivated successfully. Mar 17 18:21:35.623670 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 18:21:35.625487 systemd-logind[1925]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:21:35.627921 systemd-logind[1925]: Removed session 16. Mar 17 18:21:35.642792 systemd[1]: Started sshd@16-172.31.23.13:22-139.178.89.65:43936.service. Mar 17 18:21:35.821465 sshd[4589]: Accepted publickey for core from 139.178.89.65 port 43936 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:35.824426 sshd[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:35.834299 systemd[1]: Started session-17.scope. Mar 17 18:21:35.834996 systemd-logind[1925]: New session 17 of user core. Mar 17 18:21:38.323650 sshd[4589]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:38.330533 systemd-logind[1925]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:21:38.332930 systemd[1]: sshd@16-172.31.23.13:22-139.178.89.65:43936.service: Deactivated successfully. Mar 17 18:21:38.334605 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 18:21:38.336528 systemd-logind[1925]: Removed session 17. Mar 17 18:21:38.347572 systemd[1]: Started sshd@17-172.31.23.13:22-139.178.89.65:43952.service. 
Mar 17 18:21:38.540653 sshd[4606]: Accepted publickey for core from 139.178.89.65 port 43952 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:38.545533 sshd[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:38.555651 systemd[1]: Started session-18.scope. Mar 17 18:21:38.556332 systemd-logind[1925]: New session 18 of user core. Mar 17 18:21:39.040728 sshd[4606]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:39.045555 systemd-logind[1925]: Session 18 logged out. Waiting for processes to exit. Mar 17 18:21:39.047324 systemd[1]: sshd@17-172.31.23.13:22-139.178.89.65:43952.service: Deactivated successfully. Mar 17 18:21:39.049342 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:21:39.050408 systemd-logind[1925]: Removed session 18. Mar 17 18:21:39.066359 systemd[1]: Started sshd@18-172.31.23.13:22-139.178.89.65:43956.service. Mar 17 18:21:39.247776 sshd[4618]: Accepted publickey for core from 139.178.89.65 port 43956 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:39.249599 sshd[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:39.259736 systemd[1]: Started session-19.scope. Mar 17 18:21:39.260903 systemd-logind[1925]: New session 19 of user core. Mar 17 18:21:39.524156 sshd[4618]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:39.528830 systemd-logind[1925]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:21:39.529243 systemd[1]: sshd@18-172.31.23.13:22-139.178.89.65:43956.service: Deactivated successfully. Mar 17 18:21:39.532684 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 18:21:39.534784 systemd-logind[1925]: Removed session 19. Mar 17 18:21:44.549040 systemd[1]: Started sshd@19-172.31.23.13:22-139.178.89.65:47274.service. 
Mar 17 18:21:44.723022 sshd[4631]: Accepted publickey for core from 139.178.89.65 port 47274 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:44.726347 sshd[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:44.733980 systemd-logind[1925]: New session 20 of user core. Mar 17 18:21:44.735510 systemd[1]: Started session-20.scope. Mar 17 18:21:44.971861 sshd[4631]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:44.976853 systemd[1]: sshd@19-172.31.23.13:22-139.178.89.65:47274.service: Deactivated successfully. Mar 17 18:21:44.978933 systemd-logind[1925]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:21:44.979254 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:21:44.982238 systemd-logind[1925]: Removed session 20. Mar 17 18:21:49.999180 systemd[1]: Started sshd@20-172.31.23.13:22-139.178.89.65:47276.service. Mar 17 18:21:50.178583 sshd[4647]: Accepted publickey for core from 139.178.89.65 port 47276 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:50.181924 sshd[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:50.189849 systemd-logind[1925]: New session 21 of user core. Mar 17 18:21:50.191837 systemd[1]: Started session-21.scope. Mar 17 18:21:50.447645 sshd[4647]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:50.452565 systemd[1]: sshd@20-172.31.23.13:22-139.178.89.65:47276.service: Deactivated successfully. Mar 17 18:21:50.455369 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:21:50.455999 systemd-logind[1925]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:21:50.458414 systemd-logind[1925]: Removed session 21. Mar 17 18:21:55.474421 systemd[1]: Started sshd@21-172.31.23.13:22-139.178.89.65:37022.service. 
Mar 17 18:21:55.652483 sshd[4662]: Accepted publickey for core from 139.178.89.65 port 37022 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:21:55.655100 sshd[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:21:55.664165 systemd-logind[1925]: New session 22 of user core. Mar 17 18:21:55.664469 systemd[1]: Started session-22.scope. Mar 17 18:21:55.904794 sshd[4662]: pam_unix(sshd:session): session closed for user core Mar 17 18:21:55.910980 systemd[1]: sshd@21-172.31.23.13:22-139.178.89.65:37022.service: Deactivated successfully. Mar 17 18:21:55.912465 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:21:55.914870 systemd-logind[1925]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:21:55.917372 systemd-logind[1925]: Removed session 22. Mar 17 18:22:00.930133 systemd[1]: Started sshd@22-172.31.23.13:22-139.178.89.65:37034.service. Mar 17 18:22:01.107063 sshd[4676]: Accepted publickey for core from 139.178.89.65 port 37034 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:22:01.109670 sshd[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:22:01.117885 systemd-logind[1925]: New session 23 of user core. Mar 17 18:22:01.119625 systemd[1]: Started session-23.scope. Mar 17 18:22:01.373490 sshd[4676]: pam_unix(sshd:session): session closed for user core Mar 17 18:22:01.379001 systemd-logind[1925]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:22:01.379506 systemd[1]: sshd@22-172.31.23.13:22-139.178.89.65:37034.service: Deactivated successfully. Mar 17 18:22:01.381029 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:22:01.383262 systemd-logind[1925]: Removed session 23. Mar 17 18:22:01.398181 systemd[1]: Started sshd@23-172.31.23.13:22-139.178.89.65:58206.service. 
Mar 17 18:22:01.572538 sshd[4689]: Accepted publickey for core from 139.178.89.65 port 58206 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:22:01.575250 sshd[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:22:01.585345 systemd[1]: Started session-24.scope. Mar 17 18:22:01.586078 systemd-logind[1925]: New session 24 of user core. Mar 17 18:22:03.804260 env[1938]: time="2025-03-17T18:22:03.803658780Z" level=info msg="StopContainer for \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\" with timeout 30 (s)" Mar 17 18:22:03.805373 env[1938]: time="2025-03-17T18:22:03.805307012Z" level=info msg="Stop container \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\" with signal terminated" Mar 17 18:22:03.807760 systemd[1]: run-containerd-runc-k8s.io-de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4-runc.fGTwTd.mount: Deactivated successfully. Mar 17 18:22:03.845271 env[1938]: time="2025-03-17T18:22:03.845193870Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:22:03.866922 kubelet[3116]: E0317 18:22:03.866807 3116 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:22:03.872367 env[1938]: time="2025-03-17T18:22:03.871453917Z" level=info msg="StopContainer for \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\" with timeout 2 (s)" Mar 17 18:22:03.877068 env[1938]: time="2025-03-17T18:22:03.876983171Z" level=info msg="Stop container \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\" with signal terminated" Mar 17 18:22:03.896151 systemd-networkd[1604]: lxc_health: Link DOWN 
Mar 17 18:22:03.896163 systemd-networkd[1604]: lxc_health: Lost carrier Mar 17 18:22:03.924349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300-rootfs.mount: Deactivated successfully. Mar 17 18:22:03.947550 env[1938]: time="2025-03-17T18:22:03.947482358Z" level=info msg="shim disconnected" id=ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300 Mar 17 18:22:03.947906 env[1938]: time="2025-03-17T18:22:03.947869978Z" level=warning msg="cleaning up after shim disconnected" id=ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300 namespace=k8s.io Mar 17 18:22:03.948032 env[1938]: time="2025-03-17T18:22:03.948003444Z" level=info msg="cleaning up dead shim" Mar 17 18:22:03.975133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4-rootfs.mount: Deactivated successfully. Mar 17 18:22:03.984640 env[1938]: time="2025-03-17T18:22:03.984542248Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4753 runtime=io.containerd.runc.v2\n" Mar 17 18:22:03.986409 env[1938]: time="2025-03-17T18:22:03.986335167Z" level=info msg="shim disconnected" id=de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4 Mar 17 18:22:03.986409 env[1938]: time="2025-03-17T18:22:03.986405140Z" level=warning msg="cleaning up after shim disconnected" id=de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4 namespace=k8s.io Mar 17 18:22:03.987123 env[1938]: time="2025-03-17T18:22:03.986428517Z" level=info msg="cleaning up dead shim" Mar 17 18:22:03.987641 env[1938]: time="2025-03-17T18:22:03.987593764Z" level=info msg="StopContainer for \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\" returns successfully" Mar 17 18:22:03.988717 env[1938]: time="2025-03-17T18:22:03.988672501Z" level=info msg="StopPodSandbox for 
\"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\"" Mar 17 18:22:03.989177 env[1938]: time="2025-03-17T18:22:03.989137414Z" level=info msg="Container to stop \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:03.993557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d-shm.mount: Deactivated successfully. Mar 17 18:22:04.017089 env[1938]: time="2025-03-17T18:22:04.016662771Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4773 runtime=io.containerd.runc.v2\n" Mar 17 18:22:04.020157 env[1938]: time="2025-03-17T18:22:04.020081855Z" level=info msg="StopContainer for \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\" returns successfully" Mar 17 18:22:04.021217 env[1938]: time="2025-03-17T18:22:04.021152109Z" level=info msg="StopPodSandbox for \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\"" Mar 17 18:22:04.021382 env[1938]: time="2025-03-17T18:22:04.021251939Z" level=info msg="Container to stop \"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:04.021382 env[1938]: time="2025-03-17T18:22:04.021283727Z" level=info msg="Container to stop \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:04.021382 env[1938]: time="2025-03-17T18:22:04.021311952Z" level=info msg="Container to stop \"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:04.021382 env[1938]: time="2025-03-17T18:22:04.021354001Z" level=info msg="Container to stop 
\"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:04.021831 env[1938]: time="2025-03-17T18:22:04.021381553Z" level=info msg="Container to stop \"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:22:04.068835 env[1938]: time="2025-03-17T18:22:04.068637491Z" level=info msg="shim disconnected" id=5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d Mar 17 18:22:04.069509 env[1938]: time="2025-03-17T18:22:04.069465736Z" level=warning msg="cleaning up after shim disconnected" id=5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d namespace=k8s.io Mar 17 18:22:04.069807 env[1938]: time="2025-03-17T18:22:04.069777130Z" level=info msg="cleaning up dead shim" Mar 17 18:22:04.094308 env[1938]: time="2025-03-17T18:22:04.094238060Z" level=info msg="shim disconnected" id=4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449 Mar 17 18:22:04.094843 env[1938]: time="2025-03-17T18:22:04.094797943Z" level=warning msg="cleaning up after shim disconnected" id=4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449 namespace=k8s.io Mar 17 18:22:04.095005 env[1938]: time="2025-03-17T18:22:04.094974911Z" level=info msg="cleaning up dead shim" Mar 17 18:22:04.103414 env[1938]: time="2025-03-17T18:22:04.103353742Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4822 runtime=io.containerd.runc.v2\n" Mar 17 18:22:04.104243 env[1938]: time="2025-03-17T18:22:04.104194083Z" level=info msg="TearDown network for sandbox \"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\" successfully" Mar 17 18:22:04.104414 env[1938]: time="2025-03-17T18:22:04.104379835Z" level=info msg="StopPodSandbox for 
\"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\" returns successfully" Mar 17 18:22:04.121849 env[1938]: time="2025-03-17T18:22:04.121788883Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4843 runtime=io.containerd.runc.v2\n" Mar 17 18:22:04.122920 env[1938]: time="2025-03-17T18:22:04.122863865Z" level=info msg="TearDown network for sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" successfully" Mar 17 18:22:04.123123 env[1938]: time="2025-03-17T18:22:04.123088557Z" level=info msg="StopPodSandbox for \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" returns successfully" Mar 17 18:22:04.166294 kubelet[3116]: I0317 18:22:04.166253 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99ce7116-3aec-48db-bff9-f0fc1efc88e1-clustermesh-secrets\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.166565 kubelet[3116]: I0317 18:22:04.166538 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-etc-cni-netd\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.166711 kubelet[3116]: I0317 18:22:04.166687 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-run\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.166904 kubelet[3116]: I0317 18:22:04.166878 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-host-proc-sys-kernel\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.167038 kubelet[3116]: I0317 18:22:04.167014 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-bpf-maps\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.167173 kubelet[3116]: I0317 18:22:04.167149 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-lib-modules\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.167320 kubelet[3116]: I0317 18:22:04.167295 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99ce7116-3aec-48db-bff9-f0fc1efc88e1-hubble-tls\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.167466 kubelet[3116]: I0317 18:22:04.167442 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpxxl\" (UniqueName: \"kubernetes.io/projected/99ce7116-3aec-48db-bff9-f0fc1efc88e1-kube-api-access-fpxxl\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.167629 kubelet[3116]: I0317 18:22:04.167605 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-host-proc-sys-net\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.167803 kubelet[3116]: I0317 18:22:04.167777 3116 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-hostproc\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.167960 kubelet[3116]: I0317 18:22:04.167935 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88nzr\" (UniqueName: \"kubernetes.io/projected/6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5-kube-api-access-88nzr\") pod \"6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5\" (UID: \"6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5\") " Mar 17 18:22:04.168143 kubelet[3116]: I0317 18:22:04.168082 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5-cilium-config-path\") pod \"6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5\" (UID: \"6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5\") " Mar 17 18:22:04.168279 kubelet[3116]: I0317 18:22:04.168256 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cni-path\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.168414 kubelet[3116]: I0317 18:22:04.168389 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-xtables-lock\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.168549 kubelet[3116]: I0317 18:22:04.168525 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-cgroup\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: 
\"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.168679 kubelet[3116]: I0317 18:22:04.168655 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-config-path\") pod \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\" (UID: \"99ce7116-3aec-48db-bff9-f0fc1efc88e1\") " Mar 17 18:22:04.176005 kubelet[3116]: I0317 18:22:04.175860 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:04.179179 kubelet[3116]: I0317 18:22:04.176299 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-hostproc" (OuterVolumeSpecName: "hostproc") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:04.179179 kubelet[3116]: I0317 18:22:04.176887 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:04.179179 kubelet[3116]: I0317 18:22:04.176928 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:04.179179 kubelet[3116]: I0317 18:22:04.176959 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:04.179599 kubelet[3116]: I0317 18:22:04.176988 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:04.179599 kubelet[3116]: I0317 18:22:04.177012 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:04.179599 kubelet[3116]: I0317 18:22:04.179049 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:22:04.179599 kubelet[3116]: I0317 18:22:04.179288 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cni-path" (OuterVolumeSpecName: "cni-path") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:04.184938 kubelet[3116]: I0317 18:22:04.184883 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5" (UID: "6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:22:04.185215 kubelet[3116]: I0317 18:22:04.185169 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:04.185453 kubelet[3116]: I0317 18:22:04.185376 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:04.186233 kubelet[3116]: I0317 18:22:04.186191 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ce7116-3aec-48db-bff9-f0fc1efc88e1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:04.187327 kubelet[3116]: I0317 18:22:04.187280 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ce7116-3aec-48db-bff9-f0fc1efc88e1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:22:04.188058 kubelet[3116]: I0317 18:22:04.188009 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ce7116-3aec-48db-bff9-f0fc1efc88e1-kube-api-access-fpxxl" (OuterVolumeSpecName: "kube-api-access-fpxxl") pod "99ce7116-3aec-48db-bff9-f0fc1efc88e1" (UID: "99ce7116-3aec-48db-bff9-f0fc1efc88e1"). InnerVolumeSpecName "kube-api-access-fpxxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:04.191315 kubelet[3116]: I0317 18:22:04.191227 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5-kube-api-access-88nzr" (OuterVolumeSpecName: "kube-api-access-88nzr") pod "6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5" (UID: "6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5"). InnerVolumeSpecName "kube-api-access-88nzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:04.269725 kubelet[3116]: I0317 18:22:04.269675 3116 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-xtables-lock\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.269725 kubelet[3116]: I0317 18:22:04.269726 3116 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-cgroup\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270012 kubelet[3116]: I0317 18:22:04.269774 3116 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-config-path\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270012 kubelet[3116]: I0317 18:22:04.269802 3116 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/99ce7116-3aec-48db-bff9-f0fc1efc88e1-clustermesh-secrets\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270012 kubelet[3116]: I0317 18:22:04.269828 3116 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-etc-cni-netd\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270012 kubelet[3116]: I0317 18:22:04.269853 3116 reconciler_common.go:289] 
"Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-lib-modules\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270012 kubelet[3116]: I0317 18:22:04.269873 3116 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cilium-run\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270012 kubelet[3116]: I0317 18:22:04.269896 3116 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-host-proc-sys-kernel\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270012 kubelet[3116]: I0317 18:22:04.269917 3116 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-bpf-maps\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270012 kubelet[3116]: I0317 18:22:04.269936 3116 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-fpxxl\" (UniqueName: \"kubernetes.io/projected/99ce7116-3aec-48db-bff9-f0fc1efc88e1-kube-api-access-fpxxl\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270478 kubelet[3116]: I0317 18:22:04.269956 3116 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/99ce7116-3aec-48db-bff9-f0fc1efc88e1-hubble-tls\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270478 kubelet[3116]: I0317 18:22:04.269976 3116 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-hostproc\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270478 kubelet[3116]: I0317 18:22:04.269997 3116 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-88nzr\" (UniqueName: 
\"kubernetes.io/projected/6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5-kube-api-access-88nzr\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270478 kubelet[3116]: I0317 18:22:04.270018 3116 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-host-proc-sys-net\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270478 kubelet[3116]: I0317 18:22:04.270039 3116 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5-cilium-config-path\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.270478 kubelet[3116]: I0317 18:22:04.270058 3116 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/99ce7116-3aec-48db-bff9-f0fc1efc88e1-cni-path\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:04.787449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d-rootfs.mount: Deactivated successfully. Mar 17 18:22:04.787720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449-rootfs.mount: Deactivated successfully. Mar 17 18:22:04.787966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449-shm.mount: Deactivated successfully. Mar 17 18:22:04.788181 systemd[1]: var-lib-kubelet-pods-6e84bdc3\x2daeb3\x2d47fc\x2db7bb\x2dc0a4847515f5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d88nzr.mount: Deactivated successfully. Mar 17 18:22:04.788398 systemd[1]: var-lib-kubelet-pods-99ce7116\x2d3aec\x2d48db\x2dbff9\x2df0fc1efc88e1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 18:22:04.788623 systemd[1]: var-lib-kubelet-pods-99ce7116\x2d3aec\x2d48db\x2dbff9\x2df0fc1efc88e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfpxxl.mount: Deactivated successfully. Mar 17 18:22:04.788878 systemd[1]: var-lib-kubelet-pods-99ce7116\x2d3aec\x2d48db\x2dbff9\x2df0fc1efc88e1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:22:04.988306 kubelet[3116]: I0317 18:22:04.988269 3116 scope.go:117] "RemoveContainer" containerID="de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4" Mar 17 18:22:04.996775 env[1938]: time="2025-03-17T18:22:04.996275490Z" level=info msg="RemoveContainer for \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\"" Mar 17 18:22:05.004513 env[1938]: time="2025-03-17T18:22:05.004421266Z" level=info msg="RemoveContainer for \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\" returns successfully" Mar 17 18:22:05.007392 kubelet[3116]: I0317 18:22:05.007312 3116 scope.go:117] "RemoveContainer" containerID="a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4" Mar 17 18:22:05.016060 env[1938]: time="2025-03-17T18:22:05.015909963Z" level=info msg="RemoveContainer for \"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4\"" Mar 17 18:22:05.022114 env[1938]: time="2025-03-17T18:22:05.022015026Z" level=info msg="RemoveContainer for \"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4\" returns successfully" Mar 17 18:22:05.023234 kubelet[3116]: I0317 18:22:05.023175 3116 scope.go:117] "RemoveContainer" containerID="5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c" Mar 17 18:22:05.028558 env[1938]: time="2025-03-17T18:22:05.028478165Z" level=info msg="RemoveContainer for \"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c\"" Mar 17 18:22:05.035826 env[1938]: time="2025-03-17T18:22:05.035735864Z" level=info msg="RemoveContainer for 
\"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c\" returns successfully" Mar 17 18:22:05.036963 kubelet[3116]: I0317 18:22:05.036910 3116 scope.go:117] "RemoveContainer" containerID="78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68" Mar 17 18:22:05.041368 env[1938]: time="2025-03-17T18:22:05.041205323Z" level=info msg="RemoveContainer for \"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68\"" Mar 17 18:22:05.051348 env[1938]: time="2025-03-17T18:22:05.051263338Z" level=info msg="RemoveContainer for \"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68\" returns successfully" Mar 17 18:22:05.052371 kubelet[3116]: I0317 18:22:05.052166 3116 scope.go:117] "RemoveContainer" containerID="2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168" Mar 17 18:22:05.055622 env[1938]: time="2025-03-17T18:22:05.055506696Z" level=info msg="RemoveContainer for \"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168\"" Mar 17 18:22:05.061938 env[1938]: time="2025-03-17T18:22:05.061878789Z" level=info msg="RemoveContainer for \"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168\" returns successfully" Mar 17 18:22:05.063077 kubelet[3116]: I0317 18:22:05.062857 3116 scope.go:117] "RemoveContainer" containerID="de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4" Mar 17 18:22:05.063894 env[1938]: time="2025-03-17T18:22:05.063782316Z" level=error msg="ContainerStatus for \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\": not found" Mar 17 18:22:05.064816 kubelet[3116]: E0317 18:22:05.064330 3116 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\": not found" containerID="de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4" Mar 17 18:22:05.064816 kubelet[3116]: I0317 18:22:05.064411 3116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4"} err="failed to get container status \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"de6e771525749be47ee6a567a8136c921a06d755c57e84b8179a299dc06639c4\": not found" Mar 17 18:22:05.064816 kubelet[3116]: I0317 18:22:05.064600 3116 scope.go:117] "RemoveContainer" containerID="a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4" Mar 17 18:22:05.065337 env[1938]: time="2025-03-17T18:22:05.065120211Z" level=error msg="ContainerStatus for \"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4\": not found" Mar 17 18:22:05.065586 kubelet[3116]: E0317 18:22:05.065537 3116 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4\": not found" containerID="a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4" Mar 17 18:22:05.065705 kubelet[3116]: I0317 18:22:05.065641 3116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4"} err="failed to get container status \"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"a34020588f5786694b3f91686c3bb466f88c82e35db2706ea15784b0054918c4\": not found" Mar 17 18:22:05.065705 kubelet[3116]: I0317 18:22:05.065691 3116 scope.go:117] "RemoveContainer" containerID="5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c" Mar 17 18:22:05.066167 env[1938]: time="2025-03-17T18:22:05.066053410Z" level=error msg="ContainerStatus for \"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c\": not found" Mar 17 18:22:05.066809 kubelet[3116]: E0317 18:22:05.066490 3116 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c\": not found" containerID="5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c" Mar 17 18:22:05.066809 kubelet[3116]: I0317 18:22:05.066567 3116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c"} err="failed to get container status \"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5df71f3350a858927f5b0ec44cb3240d23b953a08c1d5eee0bf1d5008851929c\": not found" Mar 17 18:22:05.066809 kubelet[3116]: I0317 18:22:05.066625 3116 scope.go:117] "RemoveContainer" containerID="78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68" Mar 17 18:22:05.067120 env[1938]: time="2025-03-17T18:22:05.067029317Z" level=error msg="ContainerStatus for \"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68\": not found" Mar 17 18:22:05.067329 kubelet[3116]: E0317 18:22:05.067286 3116 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68\": not found" containerID="78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68" Mar 17 18:22:05.067438 kubelet[3116]: I0317 18:22:05.067341 3116 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68"} err="failed to get container status \"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68\": rpc error: code = NotFound desc = an error occurred when try to find container \"78ebea0dc5f1bccfab4f245b70175ab0a77dcd0d46a4ddd73f98f926e36e1c68\": not found" Mar 17 18:22:05.067438 kubelet[3116]: I0317 18:22:05.067376 3116 scope.go:117] "RemoveContainer" containerID="2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168" Mar 17 18:22:05.067994 env[1938]: time="2025-03-17T18:22:05.067731920Z" level=error msg="ContainerStatus for \"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168\": not found" Mar 17 18:22:05.068208 kubelet[3116]: E0317 18:22:05.068169 3116 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168\": not found" containerID="2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168" Mar 17 18:22:05.068302 kubelet[3116]: I0317 18:22:05.068219 3116 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168"} err="failed to get container status \"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b857b4a596bdaab89a9f4184c30e6eed5e912dd3d58ce756ac95ee4cbe60168\": not found" Mar 17 18:22:05.068302 kubelet[3116]: I0317 18:22:05.068259 3116 scope.go:117] "RemoveContainer" containerID="ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300" Mar 17 18:22:05.070186 env[1938]: time="2025-03-17T18:22:05.070133060Z" level=info msg="RemoveContainer for \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\"" Mar 17 18:22:05.074537 env[1938]: time="2025-03-17T18:22:05.074465604Z" level=info msg="RemoveContainer for \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\" returns successfully" Mar 17 18:22:05.075008 kubelet[3116]: I0317 18:22:05.074947 3116 scope.go:117] "RemoveContainer" containerID="ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300" Mar 17 18:22:05.075482 env[1938]: time="2025-03-17T18:22:05.075329513Z" level=error msg="ContainerStatus for \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\": not found" Mar 17 18:22:05.076017 kubelet[3116]: E0317 18:22:05.075965 3116 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\": not found" containerID="ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300" Mar 17 18:22:05.076174 kubelet[3116]: I0317 18:22:05.076024 3116 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300"} err="failed to get container status \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec00a3abb4f54ecd8be842c63c55052257cd7761f31f34e726de613a3bc3d300\": not found" Mar 17 18:22:05.684153 sshd[4689]: pam_unix(sshd:session): session closed for user core Mar 17 18:22:05.689364 systemd[1]: sshd@23-172.31.23.13:22-139.178.89.65:58206.service: Deactivated successfully. Mar 17 18:22:05.690851 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:22:05.693158 systemd-logind[1925]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:22:05.695352 systemd-logind[1925]: Removed session 24. Mar 17 18:22:05.710571 systemd[1]: Started sshd@24-172.31.23.13:22-139.178.89.65:58210.service. Mar 17 18:22:05.888193 sshd[4863]: Accepted publickey for core from 139.178.89.65 port 58210 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:22:05.890989 sshd[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:22:05.898549 systemd-logind[1925]: New session 25 of user core. Mar 17 18:22:05.900267 systemd[1]: Started session-25.scope. 
Mar 17 18:22:06.539477 kubelet[3116]: E0317 18:22:06.539407 3116 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-65gvt" podUID="a08c6339-2596-4844-a546-da24e90012ad" Mar 17 18:22:06.544366 kubelet[3116]: I0317 18:22:06.544289 3116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5" path="/var/lib/kubelet/pods/6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5/volumes" Mar 17 18:22:06.545383 kubelet[3116]: I0317 18:22:06.545344 3116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ce7116-3aec-48db-bff9-f0fc1efc88e1" path="/var/lib/kubelet/pods/99ce7116-3aec-48db-bff9-f0fc1efc88e1/volumes" Mar 17 18:22:07.270511 sshd[4863]: pam_unix(sshd:session): session closed for user core Mar 17 18:22:07.275877 kubelet[3116]: I0317 18:22:07.275824 3116 topology_manager.go:215] "Topology Admit Handler" podUID="31622b22-f00f-4101-8aee-9f87d21c6db3" podNamespace="kube-system" podName="cilium-q7vdp" Mar 17 18:22:07.276159 kubelet[3116]: E0317 18:22:07.276130 3116 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99ce7116-3aec-48db-bff9-f0fc1efc88e1" containerName="mount-cgroup" Mar 17 18:22:07.276289 kubelet[3116]: E0317 18:22:07.276266 3116 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99ce7116-3aec-48db-bff9-f0fc1efc88e1" containerName="clean-cilium-state" Mar 17 18:22:07.276409 kubelet[3116]: E0317 18:22:07.276387 3116 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99ce7116-3aec-48db-bff9-f0fc1efc88e1" containerName="cilium-agent" Mar 17 18:22:07.276540 kubelet[3116]: E0317 18:22:07.276516 3116 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99ce7116-3aec-48db-bff9-f0fc1efc88e1" containerName="apply-sysctl-overwrites" Mar 17 
18:22:07.276662 kubelet[3116]: E0317 18:22:07.276641 3116 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99ce7116-3aec-48db-bff9-f0fc1efc88e1" containerName="mount-bpf-fs" Mar 17 18:22:07.276798 kubelet[3116]: E0317 18:22:07.276776 3116 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5" containerName="cilium-operator" Mar 17 18:22:07.276993 kubelet[3116]: I0317 18:22:07.276967 3116 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ce7116-3aec-48db-bff9-f0fc1efc88e1" containerName="cilium-agent" Mar 17 18:22:07.277138 kubelet[3116]: I0317 18:22:07.277112 3116 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e84bdc3-aeb3-47fc-b7bb-c0a4847515f5" containerName="cilium-operator" Mar 17 18:22:07.281793 systemd[1]: sshd@24-172.31.23.13:22-139.178.89.65:58210.service: Deactivated successfully. Mar 17 18:22:07.283286 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:22:07.286119 systemd-logind[1925]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:22:07.291975 systemd-logind[1925]: Removed session 25. Mar 17 18:22:07.301618 systemd[1]: Started sshd@25-172.31.23.13:22-139.178.89.65:58220.service. 
Mar 17 18:22:07.404068 kubelet[3116]: I0317 18:22:07.404006 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cni-path\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.404348 kubelet[3116]: I0317 18:22:07.404299 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-ipsec-secrets\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.404582 kubelet[3116]: I0317 18:22:07.404537 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-host-proc-sys-kernel\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.404809 kubelet[3116]: I0317 18:22:07.404779 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31622b22-f00f-4101-8aee-9f87d21c6db3-hubble-tls\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.405025 kubelet[3116]: I0317 18:22:07.404982 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-bpf-maps\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.405220 kubelet[3116]: I0317 18:22:07.405178 3116 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-hostproc\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.405404 kubelet[3116]: I0317 18:22:07.405364 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-config-path\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.405555 kubelet[3116]: I0317 18:22:07.405529 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-run\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.405736 kubelet[3116]: I0317 18:22:07.405705 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-xtables-lock\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.405952 kubelet[3116]: I0317 18:22:07.405922 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31622b22-f00f-4101-8aee-9f87d21c6db3-clustermesh-secrets\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.406125 kubelet[3116]: I0317 18:22:07.406074 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-host-proc-sys-net\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.406346 kubelet[3116]: I0317 18:22:07.406310 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-etc-cni-netd\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.406600 kubelet[3116]: I0317 18:22:07.406548 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rrg8\" (UniqueName: \"kubernetes.io/projected/31622b22-f00f-4101-8aee-9f87d21c6db3-kube-api-access-8rrg8\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.406860 kubelet[3116]: I0317 18:22:07.406818 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-lib-modules\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.407057 kubelet[3116]: I0317 18:22:07.407027 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-cgroup\") pod \"cilium-q7vdp\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " pod="kube-system/cilium-q7vdp" Mar 17 18:22:07.528159 sshd[4874]: Accepted publickey for core from 139.178.89.65 port 58220 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:22:07.522111 sshd[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:22:07.603619 systemd-logind[1925]: New 
session 26 of user core. Mar 17 18:22:07.607670 systemd[1]: Started session-26.scope. Mar 17 18:22:07.609666 env[1938]: time="2025-03-17T18:22:07.609344635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q7vdp,Uid:31622b22-f00f-4101-8aee-9f87d21c6db3,Namespace:kube-system,Attempt:0,}" Mar 17 18:22:07.648547 env[1938]: time="2025-03-17T18:22:07.642952481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:22:07.648547 env[1938]: time="2025-03-17T18:22:07.643096424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:22:07.648547 env[1938]: time="2025-03-17T18:22:07.643124757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:22:07.648547 env[1938]: time="2025-03-17T18:22:07.644088245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4 pid=4889 runtime=io.containerd.runc.v2 Mar 17 18:22:07.728584 env[1938]: time="2025-03-17T18:22:07.728509881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q7vdp,Uid:31622b22-f00f-4101-8aee-9f87d21c6db3,Namespace:kube-system,Attempt:0,} returns sandbox id \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\"" Mar 17 18:22:07.738907 env[1938]: time="2025-03-17T18:22:07.738835459Z" level=info msg="CreateContainer within sandbox \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:22:07.762314 env[1938]: time="2025-03-17T18:22:07.762248039Z" level=info msg="CreateContainer within sandbox \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cf5034eb8254576bc6418fec1ec19f7289426422463132a04d1e970a5d7a7122\"" Mar 17 18:22:07.765889 env[1938]: time="2025-03-17T18:22:07.765827653Z" level=info msg="StartContainer for \"cf5034eb8254576bc6418fec1ec19f7289426422463132a04d1e970a5d7a7122\"" Mar 17 18:22:07.886801 env[1938]: time="2025-03-17T18:22:07.886652762Z" level=info msg="StartContainer for \"cf5034eb8254576bc6418fec1ec19f7289426422463132a04d1e970a5d7a7122\" returns successfully" Mar 17 18:22:07.964532 sshd[4874]: pam_unix(sshd:session): session closed for user core Mar 17 18:22:07.983109 systemd[1]: sshd@25-172.31.23.13:22-139.178.89.65:58220.service: Deactivated successfully. Mar 17 18:22:07.984555 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 18:22:07.995903 systemd-logind[1925]: Session 26 logged out. Waiting for processes to exit. Mar 17 18:22:08.001110 systemd[1]: Started sshd@26-172.31.23.13:22-139.178.89.65:58236.service. Mar 17 18:22:08.011868 systemd-logind[1925]: Removed session 26. 
Mar 17 18:22:08.016088 env[1938]: time="2025-03-17T18:22:08.016024798Z" level=info msg="shim disconnected" id=cf5034eb8254576bc6418fec1ec19f7289426422463132a04d1e970a5d7a7122 Mar 17 18:22:08.016548 env[1938]: time="2025-03-17T18:22:08.016509704Z" level=warning msg="cleaning up after shim disconnected" id=cf5034eb8254576bc6418fec1ec19f7289426422463132a04d1e970a5d7a7122 namespace=k8s.io Mar 17 18:22:08.016714 env[1938]: time="2025-03-17T18:22:08.016683264Z" level=info msg="cleaning up dead shim" Mar 17 18:22:08.024505 env[1938]: time="2025-03-17T18:22:08.024437466Z" level=info msg="StopContainer for \"cf5034eb8254576bc6418fec1ec19f7289426422463132a04d1e970a5d7a7122\" with timeout 2 (s)" Mar 17 18:22:08.039965 env[1938]: time="2025-03-17T18:22:08.039905836Z" level=info msg="StopContainer for \"cf5034eb8254576bc6418fec1ec19f7289426422463132a04d1e970a5d7a7122\" returns successfully" Mar 17 18:22:08.043781 env[1938]: time="2025-03-17T18:22:08.041047468Z" level=info msg="StopPodSandbox for \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\"" Mar 17 18:22:08.080926 env[1938]: time="2025-03-17T18:22:08.080871727Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4984 runtime=io.containerd.runc.v2\n" Mar 17 18:22:08.116511 env[1938]: time="2025-03-17T18:22:08.116445414Z" level=info msg="shim disconnected" id=02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4 Mar 17 18:22:08.116897 env[1938]: time="2025-03-17T18:22:08.116863130Z" level=warning msg="cleaning up after shim disconnected" id=02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4 namespace=k8s.io Mar 17 18:22:08.117044 env[1938]: time="2025-03-17T18:22:08.117011345Z" level=info msg="cleaning up dead shim" Mar 17 18:22:08.135338 env[1938]: time="2025-03-17T18:22:08.135283383Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io 
pid=5018 runtime=io.containerd.runc.v2\n" Mar 17 18:22:08.136171 env[1938]: time="2025-03-17T18:22:08.136098692Z" level=info msg="TearDown network for sandbox \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\" successfully" Mar 17 18:22:08.136343 env[1938]: time="2025-03-17T18:22:08.136308864Z" level=info msg="StopPodSandbox for \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\" returns successfully" Mar 17 18:22:08.207430 sshd[4983]: Accepted publickey for core from 139.178.89.65 port 58236 ssh2: RSA SHA256:azelU3G0DadBCmAXuAehsKOCz630heU8UfFnUiqM6ac Mar 17 18:22:08.210190 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:22:08.219229 systemd-logind[1925]: New session 27 of user core. Mar 17 18:22:08.219673 systemd[1]: Started session-27.scope. Mar 17 18:22:08.220901 kubelet[3116]: I0317 18:22:08.220860 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-cgroup\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.221898 kubelet[3116]: I0317 18:22:08.221865 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-host-proc-sys-kernel\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.223346 kubelet[3116]: I0317 18:22:08.223285 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31622b22-f00f-4101-8aee-9f87d21c6db3-clustermesh-secrets\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.223811 kubelet[3116]: I0317 18:22:08.221958 3116 operation_generator.go:887] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:08.224655 kubelet[3116]: I0317 18:22:08.222496 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:08.225561 kubelet[3116]: I0317 18:22:08.224391 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:08.226393 kubelet[3116]: I0317 18:22:08.223949 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-host-proc-sys-net\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.227522 kubelet[3116]: I0317 18:22:08.227481 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rrg8\" (UniqueName: \"kubernetes.io/projected/31622b22-f00f-4101-8aee-9f87d21c6db3-kube-api-access-8rrg8\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.227794 kubelet[3116]: I0317 18:22:08.227725 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31622b22-f00f-4101-8aee-9f87d21c6db3-hubble-tls\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.227964 kubelet[3116]: I0317 18:22:08.227935 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-hostproc\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.229781 kubelet[3116]: I0317 18:22:08.229711 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-config-path\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.229971 kubelet[3116]: I0317 18:22:08.229943 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-bpf-maps\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.230120 kubelet[3116]: I0317 18:22:08.230095 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-ipsec-secrets\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.235292 kubelet[3116]: I0317 18:22:08.232686 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-xtables-lock\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.248865 kubelet[3116]: I0317 18:22:08.248817 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-etc-cni-netd\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.252055 kubelet[3116]: I0317 18:22:08.252006 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-run\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.252318 kubelet[3116]: I0317 18:22:08.252289 3116 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-lib-modules\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.252474 kubelet[3116]: I0317 18:22:08.252450 3116 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cni-path\") pod \"31622b22-f00f-4101-8aee-9f87d21c6db3\" (UID: \"31622b22-f00f-4101-8aee-9f87d21c6db3\") " Mar 17 18:22:08.252642 kubelet[3116]: I0317 18:22:08.252616 3116 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-host-proc-sys-kernel\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.252813 kubelet[3116]: I0317 18:22:08.252790 3116 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-host-proc-sys-net\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.252958 kubelet[3116]: I0317 18:22:08.252936 3116 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-cgroup\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.253163 kubelet[3116]: I0317 18:22:08.236656 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:08.253304 kubelet[3116]: I0317 18:22:08.248133 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:08.253437 kubelet[3116]: I0317 18:22:08.248639 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-hostproc" (OuterVolumeSpecName: "hostproc") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:08.253554 kubelet[3116]: I0317 18:22:08.249007 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:08.253679 kubelet[3116]: I0317 18:22:08.249230 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31622b22-f00f-4101-8aee-9f87d21c6db3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:22:08.259538 kubelet[3116]: I0317 18:22:08.252199 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:08.259538 kubelet[3116]: I0317 18:22:08.253097 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:08.259538 kubelet[3116]: I0317 18:22:08.253788 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cni-path" (OuterVolumeSpecName: "cni-path") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:22:08.259538 kubelet[3116]: I0317 18:22:08.259424 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:22:08.265800 kubelet[3116]: I0317 18:22:08.265729 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:22:08.266650 kubelet[3116]: I0317 18:22:08.266589 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31622b22-f00f-4101-8aee-9f87d21c6db3-kube-api-access-8rrg8" (OuterVolumeSpecName: "kube-api-access-8rrg8") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "kube-api-access-8rrg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:08.266818 kubelet[3116]: I0317 18:22:08.266733 3116 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31622b22-f00f-4101-8aee-9f87d21c6db3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "31622b22-f00f-4101-8aee-9f87d21c6db3" (UID: "31622b22-f00f-4101-8aee-9f87d21c6db3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:22:08.354232 kubelet[3116]: I0317 18:22:08.354166 3116 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/31622b22-f00f-4101-8aee-9f87d21c6db3-clustermesh-secrets\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.354232 kubelet[3116]: I0317 18:22:08.354226 3116 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8rrg8\" (UniqueName: \"kubernetes.io/projected/31622b22-f00f-4101-8aee-9f87d21c6db3-kube-api-access-8rrg8\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.354479 kubelet[3116]: I0317 18:22:08.354252 3116 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/31622b22-f00f-4101-8aee-9f87d21c6db3-hubble-tls\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.354479 kubelet[3116]: I0317 18:22:08.354275 3116 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-hostproc\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.354479 kubelet[3116]: I0317 18:22:08.354299 3116 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-config-path\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.354479 kubelet[3116]: I0317 18:22:08.354322 3116 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-bpf-maps\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.354479 kubelet[3116]: I0317 18:22:08.354342 3116 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-ipsec-secrets\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.354479 kubelet[3116]: I0317 18:22:08.354362 3116 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-xtables-lock\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.354479 kubelet[3116]: I0317 18:22:08.354382 3116 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-etc-cni-netd\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.354479 kubelet[3116]: I0317 18:22:08.354403 3116 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cilium-run\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.354962 kubelet[3116]: I0317 18:22:08.354422 3116 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-lib-modules\") on 
node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.354962 kubelet[3116]: I0317 18:22:08.354441 3116 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/31622b22-f00f-4101-8aee-9f87d21c6db3-cni-path\") on node \"ip-172-31-23-13\" DevicePath \"\"" Mar 17 18:22:08.522802 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4-shm.mount: Deactivated successfully. Mar 17 18:22:08.523707 systemd[1]: var-lib-kubelet-pods-31622b22\x2df00f\x2d4101\x2d8aee\x2d9f87d21c6db3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8rrg8.mount: Deactivated successfully. Mar 17 18:22:08.524220 systemd[1]: var-lib-kubelet-pods-31622b22\x2df00f\x2d4101\x2d8aee\x2d9f87d21c6db3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 18:22:08.524614 systemd[1]: var-lib-kubelet-pods-31622b22\x2df00f\x2d4101\x2d8aee\x2d9f87d21c6db3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:22:08.525070 systemd[1]: var-lib-kubelet-pods-31622b22\x2df00f\x2d4101\x2d8aee\x2d9f87d21c6db3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:22:08.540980 kubelet[3116]: E0317 18:22:08.540905 3116 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-65gvt" podUID="a08c6339-2596-4844-a546-da24e90012ad" Mar 17 18:22:08.551822 kubelet[3116]: I0317 18:22:08.551714 3116 scope.go:117] "RemoveContainer" containerID="cf5034eb8254576bc6418fec1ec19f7289426422463132a04d1e970a5d7a7122" Mar 17 18:22:08.556700 env[1938]: time="2025-03-17T18:22:08.556634279Z" level=info msg="RemoveContainer for \"cf5034eb8254576bc6418fec1ec19f7289426422463132a04d1e970a5d7a7122\"" Mar 17 18:22:08.561547 env[1938]: time="2025-03-17T18:22:08.561480304Z" level=info msg="RemoveContainer for \"cf5034eb8254576bc6418fec1ec19f7289426422463132a04d1e970a5d7a7122\" returns successfully" Mar 17 18:22:08.564150 env[1938]: time="2025-03-17T18:22:08.564093251Z" level=info msg="StopPodSandbox for \"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\"" Mar 17 18:22:08.564295 env[1938]: time="2025-03-17T18:22:08.564238706Z" level=info msg="TearDown network for sandbox \"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\" successfully" Mar 17 18:22:08.564362 env[1938]: time="2025-03-17T18:22:08.564296091Z" level=info msg="StopPodSandbox for \"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\" returns successfully" Mar 17 18:22:08.565238 env[1938]: time="2025-03-17T18:22:08.565185777Z" level=info msg="RemovePodSandbox for \"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\"" Mar 17 18:22:08.565392 env[1938]: time="2025-03-17T18:22:08.565247111Z" level=info msg="Forcibly stopping sandbox \"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\"" Mar 17 18:22:08.565392 env[1938]: time="2025-03-17T18:22:08.565373341Z" level=info msg="TearDown network for sandbox 
\"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\" successfully" Mar 17 18:22:08.570645 env[1938]: time="2025-03-17T18:22:08.570552829Z" level=info msg="RemovePodSandbox \"5699b62350018df448c38cf8a92acb80c1b1053c807f0c40106838397022639d\" returns successfully" Mar 17 18:22:08.573770 env[1938]: time="2025-03-17T18:22:08.571813228Z" level=info msg="StopPodSandbox for \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\"" Mar 17 18:22:08.573770 env[1938]: time="2025-03-17T18:22:08.572233296Z" level=info msg="TearDown network for sandbox \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\" successfully" Mar 17 18:22:08.573770 env[1938]: time="2025-03-17T18:22:08.572294474Z" level=info msg="StopPodSandbox for \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\" returns successfully" Mar 17 18:22:08.575121 env[1938]: time="2025-03-17T18:22:08.575050871Z" level=info msg="RemovePodSandbox for \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\"" Mar 17 18:22:08.575396 env[1938]: time="2025-03-17T18:22:08.575326949Z" level=info msg="Forcibly stopping sandbox \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\"" Mar 17 18:22:08.575656 env[1938]: time="2025-03-17T18:22:08.575603351Z" level=info msg="TearDown network for sandbox \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\" successfully" Mar 17 18:22:08.584739 env[1938]: time="2025-03-17T18:22:08.584670188Z" level=info msg="RemovePodSandbox \"02377d65435fc96544839270d8594e1edd63de92275bf3bcf5f1a941fea736e4\" returns successfully" Mar 17 18:22:08.587635 env[1938]: time="2025-03-17T18:22:08.587578185Z" level=info msg="StopPodSandbox for \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\"" Mar 17 18:22:08.588369 env[1938]: time="2025-03-17T18:22:08.588288683Z" level=info msg="TearDown network for sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" successfully" Mar 17 
18:22:08.588527 env[1938]: time="2025-03-17T18:22:08.588493480Z" level=info msg="StopPodSandbox for \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" returns successfully" Mar 17 18:22:08.590419 env[1938]: time="2025-03-17T18:22:08.590369887Z" level=info msg="RemovePodSandbox for \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\"" Mar 17 18:22:08.590717 env[1938]: time="2025-03-17T18:22:08.590649361Z" level=info msg="Forcibly stopping sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\"" Mar 17 18:22:08.591053 env[1938]: time="2025-03-17T18:22:08.591011588Z" level=info msg="TearDown network for sandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" successfully" Mar 17 18:22:08.595511 env[1938]: time="2025-03-17T18:22:08.595431665Z" level=info msg="RemovePodSandbox \"4f286ed14463995dc71103e91d1773e35f07b7deed291b1da782a13b086c4449\" returns successfully" Mar 17 18:22:08.868131 kubelet[3116]: E0317 18:22:08.867867 3116 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:22:09.084106 kubelet[3116]: I0317 18:22:09.083996 3116 topology_manager.go:215] "Topology Admit Handler" podUID="b16ee7f1-e79c-493f-97ee-f88b18e5aab0" podNamespace="kube-system" podName="cilium-hp7df" Mar 17 18:22:09.084300 kubelet[3116]: E0317 18:22:09.084154 3116 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31622b22-f00f-4101-8aee-9f87d21c6db3" containerName="mount-cgroup" Mar 17 18:22:09.084300 kubelet[3116]: I0317 18:22:09.084237 3116 memory_manager.go:354] "RemoveStaleState removing state" podUID="31622b22-f00f-4101-8aee-9f87d21c6db3" containerName="mount-cgroup" Mar 17 18:22:09.159693 kubelet[3116]: I0317 18:22:09.159559 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-host-proc-sys-net\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df" Mar 17 18:22:09.160005 kubelet[3116]: I0317 18:22:09.159974 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-etc-cni-netd\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df" Mar 17 18:22:09.160238 kubelet[3116]: I0317 18:22:09.160210 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-lib-modules\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df" Mar 17 18:22:09.160454 kubelet[3116]: I0317 18:22:09.160427 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgkzx\" (UniqueName: \"kubernetes.io/projected/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-kube-api-access-lgkzx\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df" Mar 17 18:22:09.160627 kubelet[3116]: I0317 18:22:09.160601 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-cilium-run\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df" Mar 17 18:22:09.160799 kubelet[3116]: I0317 18:22:09.160773 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-clustermesh-secrets\") pod \"cilium-hp7df\" (UID: 
\"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df"
Mar 17 18:22:09.160963 kubelet[3116]: I0317 18:22:09.160938 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-hostproc\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df"
Mar 17 18:22:09.161116 kubelet[3116]: I0317 18:22:09.161090 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-hubble-tls\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df"
Mar 17 18:22:09.161269 kubelet[3116]: I0317 18:22:09.161244 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-bpf-maps\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df"
Mar 17 18:22:09.161418 kubelet[3116]: I0317 18:22:09.161393 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-xtables-lock\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df"
Mar 17 18:22:09.161565 kubelet[3116]: I0317 18:22:09.161540 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-host-proc-sys-kernel\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df"
Mar 17 18:22:09.161740 kubelet[3116]: I0317 18:22:09.161700 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-cilium-ipsec-secrets\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df"
Mar 17 18:22:09.161926 kubelet[3116]: I0317 18:22:09.161900 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-cilium-config-path\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df"
Mar 17 18:22:09.162081 kubelet[3116]: I0317 18:22:09.162043 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-cilium-cgroup\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df"
Mar 17 18:22:09.162240 kubelet[3116]: I0317 18:22:09.162213 3116 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b16ee7f1-e79c-493f-97ee-f88b18e5aab0-cni-path\") pod \"cilium-hp7df\" (UID: \"b16ee7f1-e79c-493f-97ee-f88b18e5aab0\") " pod="kube-system/cilium-hp7df"
Mar 17 18:22:09.395210 env[1938]: time="2025-03-17T18:22:09.395130013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hp7df,Uid:b16ee7f1-e79c-493f-97ee-f88b18e5aab0,Namespace:kube-system,Attempt:0,}"
Mar 17 18:22:09.429884 env[1938]: time="2025-03-17T18:22:09.428892112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:22:09.429884 env[1938]: time="2025-03-17T18:22:09.428977037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:22:09.429884 env[1938]: time="2025-03-17T18:22:09.429004110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:22:09.430592 env[1938]: time="2025-03-17T18:22:09.429795551Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45 pid=5056 runtime=io.containerd.runc.v2
Mar 17 18:22:09.589236 env[1938]: time="2025-03-17T18:22:09.589170464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hp7df,Uid:b16ee7f1-e79c-493f-97ee-f88b18e5aab0,Namespace:kube-system,Attempt:0,} returns sandbox id \"264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45\""
Mar 17 18:22:09.597051 env[1938]: time="2025-03-17T18:22:09.596138362Z" level=info msg="CreateContainer within sandbox \"264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:22:09.623418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984519600.mount: Deactivated successfully.
Mar 17 18:22:09.636304 env[1938]: time="2025-03-17T18:22:09.636199322Z" level=info msg="CreateContainer within sandbox \"264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"063152e759956b19d7ef22fb6fe9671de6495476b374c5fc3121bec6b2d9f0b1\""
Mar 17 18:22:09.639514 env[1938]: time="2025-03-17T18:22:09.639241254Z" level=info msg="StartContainer for \"063152e759956b19d7ef22fb6fe9671de6495476b374c5fc3121bec6b2d9f0b1\""
Mar 17 18:22:09.748009 env[1938]: time="2025-03-17T18:22:09.746981923Z" level=info msg="StartContainer for \"063152e759956b19d7ef22fb6fe9671de6495476b374c5fc3121bec6b2d9f0b1\" returns successfully"
Mar 17 18:22:09.799379 env[1938]: time="2025-03-17T18:22:09.799266977Z" level=info msg="shim disconnected" id=063152e759956b19d7ef22fb6fe9671de6495476b374c5fc3121bec6b2d9f0b1
Mar 17 18:22:09.799379 env[1938]: time="2025-03-17T18:22:09.799342194Z" level=warning msg="cleaning up after shim disconnected" id=063152e759956b19d7ef22fb6fe9671de6495476b374c5fc3121bec6b2d9f0b1 namespace=k8s.io
Mar 17 18:22:09.799379 env[1938]: time="2025-03-17T18:22:09.799364263Z" level=info msg="cleaning up dead shim"
Mar 17 18:22:09.814318 env[1938]: time="2025-03-17T18:22:09.814206163Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5141 runtime=io.containerd.runc.v2\n"
Mar 17 18:22:10.042629 env[1938]: time="2025-03-17T18:22:10.039777358Z" level=info msg="CreateContainer within sandbox \"264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:22:10.057312 env[1938]: time="2025-03-17T18:22:10.057223773Z" level=info msg="CreateContainer within sandbox \"264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5d883e0e53c75dbb6876863143a149df56ca4a6dec20b8c9bc811228f5f0f18f\""
Mar 17 18:22:10.059097 env[1938]: time="2025-03-17T18:22:10.059043971Z" level=info msg="StartContainer for \"5d883e0e53c75dbb6876863143a149df56ca4a6dec20b8c9bc811228f5f0f18f\""
Mar 17 18:22:10.163084 env[1938]: time="2025-03-17T18:22:10.154846291Z" level=info msg="StartContainer for \"5d883e0e53c75dbb6876863143a149df56ca4a6dec20b8c9bc811228f5f0f18f\" returns successfully"
Mar 17 18:22:10.208860 env[1938]: time="2025-03-17T18:22:10.208716024Z" level=info msg="shim disconnected" id=5d883e0e53c75dbb6876863143a149df56ca4a6dec20b8c9bc811228f5f0f18f
Mar 17 18:22:10.209187 env[1938]: time="2025-03-17T18:22:10.208868403Z" level=warning msg="cleaning up after shim disconnected" id=5d883e0e53c75dbb6876863143a149df56ca4a6dec20b8c9bc811228f5f0f18f namespace=k8s.io
Mar 17 18:22:10.209187 env[1938]: time="2025-03-17T18:22:10.208894672Z" level=info msg="cleaning up dead shim"
Mar 17 18:22:10.222987 env[1938]: time="2025-03-17T18:22:10.222916742Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5205 runtime=io.containerd.runc.v2\n"
Mar 17 18:22:10.523015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-063152e759956b19d7ef22fb6fe9671de6495476b374c5fc3121bec6b2d9f0b1-rootfs.mount: Deactivated successfully.
Mar 17 18:22:10.541617 kubelet[3116]: E0317 18:22:10.541078 3116 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-65gvt" podUID="a08c6339-2596-4844-a546-da24e90012ad"
Mar 17 18:22:10.545656 kubelet[3116]: I0317 18:22:10.545574 3116 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31622b22-f00f-4101-8aee-9f87d21c6db3" path="/var/lib/kubelet/pods/31622b22-f00f-4101-8aee-9f87d21c6db3/volumes"
Mar 17 18:22:11.051930 env[1938]: time="2025-03-17T18:22:11.051405862Z" level=info msg="CreateContainer within sandbox \"264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:22:11.086769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839170136.mount: Deactivated successfully.
Mar 17 18:22:11.095108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476898872.mount: Deactivated successfully.
Mar 17 18:22:11.101029 env[1938]: time="2025-03-17T18:22:11.100945348Z" level=info msg="CreateContainer within sandbox \"264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d88b80d3302a8c9e8b9b8d0979c804539deb33e6604de24551abdf7bb29f007a\""
Mar 17 18:22:11.102315 env[1938]: time="2025-03-17T18:22:11.102263012Z" level=info msg="StartContainer for \"d88b80d3302a8c9e8b9b8d0979c804539deb33e6604de24551abdf7bb29f007a\""
Mar 17 18:22:11.224368 env[1938]: time="2025-03-17T18:22:11.220664666Z" level=info msg="StartContainer for \"d88b80d3302a8c9e8b9b8d0979c804539deb33e6604de24551abdf7bb29f007a\" returns successfully"
Mar 17 18:22:11.260222 env[1938]: time="2025-03-17T18:22:11.260160345Z" level=info msg="shim disconnected" id=d88b80d3302a8c9e8b9b8d0979c804539deb33e6604de24551abdf7bb29f007a
Mar 17 18:22:11.260572 env[1938]: time="2025-03-17T18:22:11.260539517Z" level=warning msg="cleaning up after shim disconnected" id=d88b80d3302a8c9e8b9b8d0979c804539deb33e6604de24551abdf7bb29f007a namespace=k8s.io
Mar 17 18:22:11.260713 env[1938]: time="2025-03-17T18:22:11.260685848Z" level=info msg="cleaning up dead shim"
Mar 17 18:22:11.274534 env[1938]: time="2025-03-17T18:22:11.274471923Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5263 runtime=io.containerd.runc.v2\n"
Mar 17 18:22:11.522885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d88b80d3302a8c9e8b9b8d0979c804539deb33e6604de24551abdf7bb29f007a-rootfs.mount: Deactivated successfully.
Mar 17 18:22:11.884974 kubelet[3116]: I0317 18:22:11.884804 3116 setters.go:580] "Node became not ready" node="ip-172-31-23-13" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:22:11Z","lastTransitionTime":"2025-03-17T18:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:22:12.053514 env[1938]: time="2025-03-17T18:22:12.053437826Z" level=info msg="CreateContainer within sandbox \"264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:22:12.082103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1737785232.mount: Deactivated successfully.
Mar 17 18:22:12.098043 env[1938]: time="2025-03-17T18:22:12.097977449Z" level=info msg="CreateContainer within sandbox \"264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fee47a50a2d65c7a59d9b2731496a2257a24af8b5e912f35616d27a3ae5abd12\""
Mar 17 18:22:12.101870 env[1938]: time="2025-03-17T18:22:12.101805159Z" level=info msg="StartContainer for \"fee47a50a2d65c7a59d9b2731496a2257a24af8b5e912f35616d27a3ae5abd12\""
Mar 17 18:22:12.217727 env[1938]: time="2025-03-17T18:22:12.217664579Z" level=info msg="StartContainer for \"fee47a50a2d65c7a59d9b2731496a2257a24af8b5e912f35616d27a3ae5abd12\" returns successfully"
Mar 17 18:22:12.252573 env[1938]: time="2025-03-17T18:22:12.252512373Z" level=info msg="shim disconnected" id=fee47a50a2d65c7a59d9b2731496a2257a24af8b5e912f35616d27a3ae5abd12
Mar 17 18:22:12.253010 env[1938]: time="2025-03-17T18:22:12.252976771Z" level=warning msg="cleaning up after shim disconnected" id=fee47a50a2d65c7a59d9b2731496a2257a24af8b5e912f35616d27a3ae5abd12 namespace=k8s.io
Mar 17 18:22:12.253149 env[1938]: time="2025-03-17T18:22:12.253121038Z" level=info msg="cleaning up dead shim"
Mar 17 18:22:12.268572 env[1938]: time="2025-03-17T18:22:12.268516614Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5318 runtime=io.containerd.runc.v2\n"
Mar 17 18:22:12.523030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fee47a50a2d65c7a59d9b2731496a2257a24af8b5e912f35616d27a3ae5abd12-rootfs.mount: Deactivated successfully.
Mar 17 18:22:12.546608 kubelet[3116]: E0317 18:22:12.546547 3116 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-65gvt" podUID="a08c6339-2596-4844-a546-da24e90012ad"
Mar 17 18:22:13.064657 env[1938]: time="2025-03-17T18:22:13.064542082Z" level=info msg="CreateContainer within sandbox \"264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:22:13.105430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165162109.mount: Deactivated successfully.
Mar 17 18:22:13.137790 env[1938]: time="2025-03-17T18:22:13.137334732Z" level=info msg="CreateContainer within sandbox \"264bcd1adbe3f1c01af9cd77f885c8519ef17b7bb8542ed40afb93cb9118cb45\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a770dab74a2ae7badc92ee411b15017dec84e9591bbec1b9b3b19e1cf9363ff4\""
Mar 17 18:22:13.150633 env[1938]: time="2025-03-17T18:22:13.150579852Z" level=info msg="StartContainer for \"a770dab74a2ae7badc92ee411b15017dec84e9591bbec1b9b3b19e1cf9363ff4\""
Mar 17 18:22:13.153006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167844643.mount: Deactivated successfully.
Mar 17 18:22:13.326618 env[1938]: time="2025-03-17T18:22:13.325199052Z" level=info msg="StartContainer for \"a770dab74a2ae7badc92ee411b15017dec84e9591bbec1b9b3b19e1cf9363ff4\" returns successfully"
Mar 17 18:22:14.147802 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Mar 17 18:22:14.699691 systemd[1]: run-containerd-runc-k8s.io-a770dab74a2ae7badc92ee411b15017dec84e9591bbec1b9b3b19e1cf9363ff4-runc.bapOEZ.mount: Deactivated successfully.
Mar 17 18:22:16.934544 systemd[1]: run-containerd-runc-k8s.io-a770dab74a2ae7badc92ee411b15017dec84e9591bbec1b9b3b19e1cf9363ff4-runc.CPRJ17.mount: Deactivated successfully.
Mar 17 18:22:18.271966 systemd-networkd[1604]: lxc_health: Link UP
Mar 17 18:22:18.281187 (udev-worker)[5886]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 18:22:18.302948 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:22:18.302594 systemd-networkd[1604]: lxc_health: Gained carrier
Mar 17 18:22:19.273642 systemd[1]: run-containerd-runc-k8s.io-a770dab74a2ae7badc92ee411b15017dec84e9591bbec1b9b3b19e1cf9363ff4-runc.RILDuL.mount: Deactivated successfully.
Mar 17 18:22:19.466961 kubelet[3116]: I0317 18:22:19.466624 3116 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hp7df" podStartSLOduration=10.466597985 podStartE2EDuration="10.466597985s" podCreationTimestamp="2025-03-17 18:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:22:14.096344448 +0000 UTC m=+125.850223367" watchObservedRunningTime="2025-03-17 18:22:19.466597985 +0000 UTC m=+131.220476856"
Mar 17 18:22:20.030947 systemd-networkd[1604]: lxc_health: Gained IPv6LL
Mar 17 18:22:21.701992 systemd[1]: run-containerd-runc-k8s.io-a770dab74a2ae7badc92ee411b15017dec84e9591bbec1b9b3b19e1cf9363ff4-runc.nLpWea.mount: Deactivated successfully.
Mar 17 18:22:26.384048 systemd[1]: run-containerd-runc-k8s.io-a770dab74a2ae7badc92ee411b15017dec84e9591bbec1b9b3b19e1cf9363ff4-runc.omVAwu.mount: Deactivated successfully.
Mar 17 18:22:26.528133 sshd[4983]: pam_unix(sshd:session): session closed for user core
Mar 17 18:22:26.533847 systemd-logind[1925]: Session 27 logged out. Waiting for processes to exit.
Mar 17 18:22:26.534615 systemd[1]: sshd@26-172.31.23.13:22-139.178.89.65:58236.service: Deactivated successfully.
Mar 17 18:22:26.537444 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 18:22:26.539830 systemd-logind[1925]: Removed session 27.
Mar 17 18:22:41.432764 kubelet[3116]: E0317 18:22:41.432677 3116 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 18:22:41.564596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2d0f7033f15b8f4f13a7d395bc50d113f82622c833cbc339e4ba1b94ad6c848-rootfs.mount: Deactivated successfully.
Mar 17 18:22:41.594591 env[1938]: time="2025-03-17T18:22:41.594481833Z" level=info msg="shim disconnected" id=f2d0f7033f15b8f4f13a7d395bc50d113f82622c833cbc339e4ba1b94ad6c848
Mar 17 18:22:41.594591 env[1938]: time="2025-03-17T18:22:41.594585504Z" level=warning msg="cleaning up after shim disconnected" id=f2d0f7033f15b8f4f13a7d395bc50d113f82622c833cbc339e4ba1b94ad6c848 namespace=k8s.io
Mar 17 18:22:41.595422 env[1938]: time="2025-03-17T18:22:41.594610548Z" level=info msg="cleaning up dead shim"
Mar 17 18:22:41.611054 env[1938]: time="2025-03-17T18:22:41.610981368Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6024 runtime=io.containerd.runc.v2\n"
Mar 17 18:22:42.133007 kubelet[3116]: I0317 18:22:42.131964 3116 scope.go:117] "RemoveContainer" containerID="f2d0f7033f15b8f4f13a7d395bc50d113f82622c833cbc339e4ba1b94ad6c848"
Mar 17 18:22:42.137142 env[1938]: time="2025-03-17T18:22:42.137088461Z" level=info msg="CreateContainer within sandbox \"5ae254889c397f5ba58ee890b88bb01c6de5e147e9df327942930d0a2077f8c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 17 18:22:42.163892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3710039247.mount: Deactivated successfully.
Mar 17 18:22:42.172481 env[1938]: time="2025-03-17T18:22:42.172401372Z" level=info msg="CreateContainer within sandbox \"5ae254889c397f5ba58ee890b88bb01c6de5e147e9df327942930d0a2077f8c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"eefbc86b29f61e175a433d6ed7a1c1f9c083465f384c9ba8c8cac2fe7a34d6f9\""
Mar 17 18:22:42.173151 env[1938]: time="2025-03-17T18:22:42.173099513Z" level=info msg="StartContainer for \"eefbc86b29f61e175a433d6ed7a1c1f9c083465f384c9ba8c8cac2fe7a34d6f9\""
Mar 17 18:22:42.308815 env[1938]: time="2025-03-17T18:22:42.307576975Z" level=info msg="StartContainer for \"eefbc86b29f61e175a433d6ed7a1c1f9c083465f384c9ba8c8cac2fe7a34d6f9\" returns successfully"
Mar 17 18:22:46.638714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1da3c9c5f3f42507da09372a2cbf6c21b65834288e64979121ab023f77935668-rootfs.mount: Deactivated successfully.
Mar 17 18:22:46.657554 env[1938]: time="2025-03-17T18:22:46.657493349Z" level=info msg="shim disconnected" id=1da3c9c5f3f42507da09372a2cbf6c21b65834288e64979121ab023f77935668
Mar 17 18:22:46.658478 env[1938]: time="2025-03-17T18:22:46.658439464Z" level=warning msg="cleaning up after shim disconnected" id=1da3c9c5f3f42507da09372a2cbf6c21b65834288e64979121ab023f77935668 namespace=k8s.io
Mar 17 18:22:46.658627 env[1938]: time="2025-03-17T18:22:46.658599021Z" level=info msg="cleaning up dead shim"
Mar 17 18:22:46.673558 env[1938]: time="2025-03-17T18:22:46.673502485Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:22:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6084 runtime=io.containerd.runc.v2\n"
Mar 17 18:22:47.151474 kubelet[3116]: I0317 18:22:47.151417 3116 scope.go:117] "RemoveContainer" containerID="1da3c9c5f3f42507da09372a2cbf6c21b65834288e64979121ab023f77935668"
Mar 17 18:22:47.154895 env[1938]: time="2025-03-17T18:22:47.154841261Z" level=info msg="CreateContainer within sandbox \"2c73bd74ebd30ebc4395dfc02243d0013e2ae1a1c86747298eb5a5a10303dd97\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 17 18:22:47.181389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022862344.mount: Deactivated successfully.
Mar 17 18:22:47.189302 env[1938]: time="2025-03-17T18:22:47.189239207Z" level=info msg="CreateContainer within sandbox \"2c73bd74ebd30ebc4395dfc02243d0013e2ae1a1c86747298eb5a5a10303dd97\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e7671eb0b2d84a7dab04e0df77951f640a5ce10c95efdb52b2814868c4700b34\""
Mar 17 18:22:47.190776 env[1938]: time="2025-03-17T18:22:47.190692540Z" level=info msg="StartContainer for \"e7671eb0b2d84a7dab04e0df77951f640a5ce10c95efdb52b2814868c4700b34\""
Mar 17 18:22:47.318059 env[1938]: time="2025-03-17T18:22:47.317974086Z" level=info msg="StartContainer for \"e7671eb0b2d84a7dab04e0df77951f640a5ce10c95efdb52b2814868c4700b34\" returns successfully"
Mar 17 18:22:51.434035 kubelet[3116]: E0317 18:22:51.433517 3116 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 18:23:01.434920 kubelet[3116]: E0317 18:23:01.434860 3116 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"