Apr 12 18:20:36.024389 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 12 18:20:36.024429 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Apr 12 17:21:24 -00 2024
Apr 12 18:20:36.024453 kernel: efi: EFI v2.70 by EDK II
Apr 12 18:20:36.024469 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7174cf98
Apr 12 18:20:36.024483 kernel: ACPI: Early table checksum verification disabled
Apr 12 18:20:36.024497 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 12 18:20:36.024514 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 12 18:20:36.024528 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 12 18:20:36.024542 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Apr 12 18:20:36.024556 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 12 18:20:36.024576 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 12 18:20:36.024590 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 12 18:20:36.024604 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 12 18:20:36.024619 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 12 18:20:36.024636 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 12 18:20:36.024656 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 12 18:20:36.024671 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 12 18:20:36.024685 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 12 18:20:36.024700 kernel: printk: bootconsole [uart0] enabled
Apr 12 18:20:36.024715 kernel: NUMA: Failed to initialise from firmware
Apr 12 18:20:36.024731 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 12 18:20:36.024747 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Apr 12 18:20:36.024762 kernel: Zone ranges:
Apr 12 18:20:36.024777 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 12 18:20:36.024792 kernel: DMA32 empty
Apr 12 18:20:36.024807 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 12 18:20:36.024828 kernel: Movable zone start for each node
Apr 12 18:20:36.024843 kernel: Early memory node ranges
Apr 12 18:20:36.024857 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Apr 12 18:20:36.024872 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 12 18:20:36.024887 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 12 18:20:36.024901 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 12 18:20:36.024917 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 12 18:20:36.024931 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 12 18:20:36.024946 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 12 18:20:36.024961 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 12 18:20:36.024975 kernel: psci: probing for conduit method from ACPI.
Apr 12 18:20:36.024990 kernel: psci: PSCIv1.0 detected in firmware.
Apr 12 18:20:36.025009 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 12 18:20:36.025024 kernel: psci: Trusted OS migration not required
Apr 12 18:20:36.025085 kernel: psci: SMC Calling Convention v1.1
Apr 12 18:20:36.025103 kernel: ACPI: SRAT not present
Apr 12 18:20:36.025120 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Apr 12 18:20:36.025140 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Apr 12 18:20:36.025157 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 12 18:20:36.025172 kernel: Detected PIPT I-cache on CPU0
Apr 12 18:20:36.025188 kernel: CPU features: detected: GIC system register CPU interface
Apr 12 18:20:36.025203 kernel: CPU features: detected: Spectre-v2
Apr 12 18:20:36.025218 kernel: CPU features: detected: Spectre-v3a
Apr 12 18:20:36.025234 kernel: CPU features: detected: Spectre-BHB
Apr 12 18:20:36.025249 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 12 18:20:36.025265 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 12 18:20:36.025280 kernel: CPU features: detected: ARM erratum 1742098
Apr 12 18:20:36.025296 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 12 18:20:36.025315 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 12 18:20:36.025331 kernel: Policy zone: Normal
Apr 12 18:20:36.025349 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8
Apr 12 18:20:36.025366 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 12 18:20:36.025382 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 12 18:20:36.025398 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 12 18:20:36.025413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 12 18:20:36.025429 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 12 18:20:36.025446 kernel: Memory: 3824652K/4030464K available (9792K kernel code, 2092K rwdata, 7568K rodata, 36352K init, 777K bss, 205812K reserved, 0K cma-reserved)
Apr 12 18:20:36.025462 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 12 18:20:36.025482 kernel: trace event string verifier disabled
Apr 12 18:20:36.025498 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 12 18:20:36.025514 kernel: rcu: RCU event tracing is enabled.
Apr 12 18:20:36.025530 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 12 18:20:36.025546 kernel: Trampoline variant of Tasks RCU enabled.
Apr 12 18:20:36.025561 kernel: Tracing variant of Tasks RCU enabled.
Apr 12 18:20:36.025578 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 12 18:20:36.025593 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 12 18:20:36.025609 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 12 18:20:36.025624 kernel: GICv3: 96 SPIs implemented
Apr 12 18:20:36.025639 kernel: GICv3: 0 Extended SPIs implemented
Apr 12 18:20:36.025654 kernel: GICv3: Distributor has no Range Selector support
Apr 12 18:20:36.025674 kernel: Root IRQ handler: gic_handle_irq
Apr 12 18:20:36.025689 kernel: GICv3: 16 PPIs implemented
Apr 12 18:20:36.025705 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 12 18:20:36.025720 kernel: ACPI: SRAT not present
Apr 12 18:20:36.025735 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 12 18:20:36.025750 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Apr 12 18:20:36.025766 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Apr 12 18:20:36.025781 kernel: GICv3: using LPI property table @0x00000004000c0000
Apr 12 18:20:36.025797 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 12 18:20:36.025812 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Apr 12 18:20:36.025827 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 12 18:20:36.025848 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 12 18:20:36.025863 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 12 18:20:36.025879 kernel: Console: colour dummy device 80x25
Apr 12 18:20:36.025895 kernel: printk: console [tty1] enabled
Apr 12 18:20:36.025911 kernel: ACPI: Core revision 20210730
Apr 12 18:20:36.025927 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 12 18:20:36.025943 kernel: pid_max: default: 32768 minimum: 301
Apr 12 18:20:36.025959 kernel: LSM: Security Framework initializing
Apr 12 18:20:36.025975 kernel: SELinux: Initializing.
Apr 12 18:20:36.025991 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:20:36.027286 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 12 18:20:36.027305 kernel: rcu: Hierarchical SRCU implementation.
Apr 12 18:20:36.027321 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 12 18:20:36.027337 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 12 18:20:36.027353 kernel: Remapping and enabling EFI services.
Apr 12 18:20:36.027369 kernel: smp: Bringing up secondary CPUs ...
Apr 12 18:20:36.027385 kernel: Detected PIPT I-cache on CPU1
Apr 12 18:20:36.027401 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 12 18:20:36.027417 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Apr 12 18:20:36.027440 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 12 18:20:36.027456 kernel: smp: Brought up 1 node, 2 CPUs
Apr 12 18:20:36.027472 kernel: SMP: Total of 2 processors activated.
Apr 12 18:20:36.027488 kernel: CPU features: detected: 32-bit EL0 Support
Apr 12 18:20:36.027504 kernel: CPU features: detected: 32-bit EL1 Support
Apr 12 18:20:36.027520 kernel: CPU features: detected: CRC32 instructions
Apr 12 18:20:36.027535 kernel: CPU: All CPU(s) started at EL1
Apr 12 18:20:36.027551 kernel: alternatives: patching kernel code
Apr 12 18:20:36.027567 kernel: devtmpfs: initialized
Apr 12 18:20:36.027855 kernel: KASLR disabled due to lack of seed
Apr 12 18:20:36.027878 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 12 18:20:36.027895 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 12 18:20:36.027925 kernel: pinctrl core: initialized pinctrl subsystem
Apr 12 18:20:36.027946 kernel: SMBIOS 3.0.0 present.
Apr 12 18:20:36.027963 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 12 18:20:36.027980 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 12 18:20:36.027997 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 12 18:20:36.028014 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 12 18:20:36.028075 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 12 18:20:36.028098 kernel: audit: initializing netlink subsys (disabled)
Apr 12 18:20:36.028116 kernel: audit: type=2000 audit(0.262:1): state=initialized audit_enabled=0 res=1
Apr 12 18:20:36.028139 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 12 18:20:36.028156 kernel: cpuidle: using governor menu
Apr 12 18:20:36.028173 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 12 18:20:36.028190 kernel: ASID allocator initialised with 32768 entries
Apr 12 18:20:36.028207 kernel: ACPI: bus type PCI registered
Apr 12 18:20:36.028228 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 12 18:20:36.028245 kernel: Serial: AMBA PL011 UART driver
Apr 12 18:20:36.028262 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Apr 12 18:20:36.028278 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Apr 12 18:20:36.028295 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Apr 12 18:20:36.028312 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Apr 12 18:20:36.028328 kernel: cryptd: max_cpu_qlen set to 1000
Apr 12 18:20:36.028345 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 12 18:20:36.028361 kernel: ACPI: Added _OSI(Module Device)
Apr 12 18:20:36.028383 kernel: ACPI: Added _OSI(Processor Device)
Apr 12 18:20:36.028400 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 12 18:20:36.028417 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 12 18:20:36.028433 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Apr 12 18:20:36.028450 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Apr 12 18:20:36.028466 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Apr 12 18:20:36.028483 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 12 18:20:36.028499 kernel: ACPI: Interpreter enabled
Apr 12 18:20:36.028515 kernel: ACPI: Using GIC for interrupt routing
Apr 12 18:20:36.028536 kernel: ACPI: MCFG table detected, 1 entries
Apr 12 18:20:36.028553 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Apr 12 18:20:36.028875 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 12 18:20:36.030210 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 12 18:20:36.030446 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 12 18:20:36.030657 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Apr 12 18:20:36.030857 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Apr 12 18:20:36.030894 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 12 18:20:36.030912 kernel: acpiphp: Slot [1] registered
Apr 12 18:20:36.030929 kernel: acpiphp: Slot [2] registered
Apr 12 18:20:36.030946 kernel: acpiphp: Slot [3] registered
Apr 12 18:20:36.030963 kernel: acpiphp: Slot [4] registered
Apr 12 18:20:36.030980 kernel: acpiphp: Slot [5] registered
Apr 12 18:20:36.030996 kernel: acpiphp: Slot [6] registered
Apr 12 18:20:36.031013 kernel: acpiphp: Slot [7] registered
Apr 12 18:20:36.031071 kernel: acpiphp: Slot [8] registered
Apr 12 18:20:36.031098 kernel: acpiphp: Slot [9] registered
Apr 12 18:20:36.031115 kernel: acpiphp: Slot [10] registered
Apr 12 18:20:36.031131 kernel: acpiphp: Slot [11] registered
Apr 12 18:20:36.031147 kernel: acpiphp: Slot [12] registered
Apr 12 18:20:36.031164 kernel: acpiphp: Slot [13] registered
Apr 12 18:20:36.031180 kernel: acpiphp: Slot [14] registered
Apr 12 18:20:36.031196 kernel: acpiphp: Slot [15] registered
Apr 12 18:20:36.031212 kernel: acpiphp: Slot [16] registered
Apr 12 18:20:36.031229 kernel: acpiphp: Slot [17] registered
Apr 12 18:20:36.031245 kernel: acpiphp: Slot [18] registered
Apr 12 18:20:36.031266 kernel: acpiphp: Slot [19] registered
Apr 12 18:20:36.031283 kernel: acpiphp: Slot [20] registered
Apr 12 18:20:36.031299 kernel: acpiphp: Slot [21] registered
Apr 12 18:20:36.031316 kernel: acpiphp: Slot [22] registered
Apr 12 18:20:36.031333 kernel: acpiphp: Slot [23] registered
Apr 12 18:20:36.031349 kernel: acpiphp: Slot [24] registered
Apr 12 18:20:36.031365 kernel: acpiphp: Slot [25] registered
Apr 12 18:20:36.031381 kernel: acpiphp: Slot [26] registered
Apr 12 18:20:36.031398 kernel: acpiphp: Slot [27] registered
Apr 12 18:20:36.031420 kernel: acpiphp: Slot [28] registered
Apr 12 18:20:36.031438 kernel: acpiphp: Slot [29] registered
Apr 12 18:20:36.031454 kernel: acpiphp: Slot [30] registered
Apr 12 18:20:36.031471 kernel: acpiphp: Slot [31] registered
Apr 12 18:20:36.031487 kernel: PCI host bridge to bus 0000:00
Apr 12 18:20:36.031786 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 12 18:20:36.031987 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 12 18:20:36.034308 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 12 18:20:36.034529 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Apr 12 18:20:36.034779 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 12 18:20:36.035021 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 12 18:20:36.048406 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 12 18:20:36.048668 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 12 18:20:36.048891 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 12 18:20:36.049205 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 12 18:20:36.049457 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 12 18:20:36.049684 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 12 18:20:36.049938 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 12 18:20:36.050253 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 12 18:20:36.050491 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 12 18:20:36.050746 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Apr 12 18:20:36.051076 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Apr 12 18:20:36.051356 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Apr 12 18:20:36.051644 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Apr 12 18:20:36.051923 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Apr 12 18:20:36.052244 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 12 18:20:36.052448 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 12 18:20:36.052639 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 12 18:20:36.052676 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 12 18:20:36.052694 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 12 18:20:36.052712 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 12 18:20:36.052729 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 12 18:20:36.052746 kernel: iommu: Default domain type: Translated
Apr 12 18:20:36.052763 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 12 18:20:36.052780 kernel: vgaarb: loaded
Apr 12 18:20:36.052796 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 12 18:20:36.052813 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 12 18:20:36.052835 kernel: PTP clock support registered
Apr 12 18:20:36.052852 kernel: Registered efivars operations
Apr 12 18:20:36.052868 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 12 18:20:36.052885 kernel: VFS: Disk quotas dquot_6.6.0
Apr 12 18:20:36.052901 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 12 18:20:36.052918 kernel: pnp: PnP ACPI init
Apr 12 18:20:36.056404 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 12 18:20:36.056458 kernel: pnp: PnP ACPI: found 1 devices
Apr 12 18:20:36.056476 kernel: NET: Registered PF_INET protocol family
Apr 12 18:20:36.056504 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 12 18:20:36.056523 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 12 18:20:36.056540 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 12 18:20:36.056558 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 12 18:20:36.056575 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Apr 12 18:20:36.056591 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 12 18:20:36.056608 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:20:36.056625 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 12 18:20:36.056642 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 12 18:20:36.056664 kernel: PCI: CLS 0 bytes, default 64
Apr 12 18:20:36.056682 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 12 18:20:36.056698 kernel: kvm [1]: HYP mode not available
Apr 12 18:20:36.056715 kernel: Initialise system trusted keyrings
Apr 12 18:20:36.056732 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 12 18:20:36.056749 kernel: Key type asymmetric registered
Apr 12 18:20:36.056766 kernel: Asymmetric key parser 'x509' registered
Apr 12 18:20:36.056783 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 12 18:20:36.056800 kernel: io scheduler mq-deadline registered
Apr 12 18:20:36.056821 kernel: io scheduler kyber registered
Apr 12 18:20:36.056838 kernel: io scheduler bfq registered
Apr 12 18:20:36.057145 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 12 18:20:36.057177 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 12 18:20:36.057195 kernel: ACPI: button: Power Button [PWRB]
Apr 12 18:20:36.057212 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 12 18:20:36.057230 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 12 18:20:36.057444 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 12 18:20:36.057478 kernel: printk: console [ttyS0] disabled
Apr 12 18:20:36.057495 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 12 18:20:36.057512 kernel: printk: console [ttyS0] enabled
Apr 12 18:20:36.057529 kernel: printk: bootconsole [uart0] disabled
Apr 12 18:20:36.057546 kernel: thunder_xcv, ver 1.0
Apr 12 18:20:36.057562 kernel: thunder_bgx, ver 1.0
Apr 12 18:20:36.057578 kernel: nicpf, ver 1.0
Apr 12 18:20:36.057595 kernel: nicvf, ver 1.0
Apr 12 18:20:36.057815 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 12 18:20:36.058062 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-04-12T18:20:35 UTC (1712946035)
Apr 12 18:20:36.058096 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 12 18:20:36.058114 kernel: NET: Registered PF_INET6 protocol family
Apr 12 18:20:36.058131 kernel: Segment Routing with IPv6
Apr 12 18:20:36.058147 kernel: In-situ OAM (IOAM) with IPv6
Apr 12 18:20:36.058164 kernel: NET: Registered PF_PACKET protocol family
Apr 12 18:20:36.058181 kernel: Key type dns_resolver registered
Apr 12 18:20:36.058198 kernel: registered taskstats version 1
Apr 12 18:20:36.058224 kernel: Loading compiled-in X.509 certificates
Apr 12 18:20:36.058242 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 8c258d82bbd8df4a9da2c0ea4108142f04be6b34'
Apr 12 18:20:36.058258 kernel: Key type .fscrypt registered
Apr 12 18:20:36.058276 kernel: Key type fscrypt-provisioning registered
Apr 12 18:20:36.058292 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 12 18:20:36.058308 kernel: ima: Allocated hash algorithm: sha1
Apr 12 18:20:36.058325 kernel: ima: No architecture policies found
Apr 12 18:20:36.058341 kernel: Freeing unused kernel memory: 36352K
Apr 12 18:20:36.058358 kernel: Run /init as init process
Apr 12 18:20:36.058382 kernel: with arguments:
Apr 12 18:20:36.058400 kernel: /init
Apr 12 18:20:36.058417 kernel: with environment:
Apr 12 18:20:36.058433 kernel: HOME=/
Apr 12 18:20:36.058449 kernel: TERM=linux
Apr 12 18:20:36.058466 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 12 18:20:36.058489 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:20:36.058512 systemd[1]: Detected virtualization amazon.
Apr 12 18:20:36.058539 systemd[1]: Detected architecture arm64.
Apr 12 18:20:36.058558 systemd[1]: Running in initrd.
Apr 12 18:20:36.058576 systemd[1]: No hostname configured, using default hostname.
Apr 12 18:20:36.058593 systemd[1]: Hostname set to .
Apr 12 18:20:36.058612 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:20:36.058630 systemd[1]: Queued start job for default target initrd.target.
Apr 12 18:20:36.058649 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:20:36.058666 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:20:36.058708 systemd[1]: Reached target paths.target.
Apr 12 18:20:36.058731 systemd[1]: Reached target slices.target.
Apr 12 18:20:36.058749 systemd[1]: Reached target swap.target.
Apr 12 18:20:36.058767 systemd[1]: Reached target timers.target.
Apr 12 18:20:36.058786 systemd[1]: Listening on iscsid.socket.
Apr 12 18:20:36.058804 systemd[1]: Listening on iscsiuio.socket.
Apr 12 18:20:36.058822 systemd[1]: Listening on systemd-journald-audit.socket.
Apr 12 18:20:36.058840 systemd[1]: Listening on systemd-journald-dev-log.socket.
Apr 12 18:20:36.058864 systemd[1]: Listening on systemd-journald.socket.
Apr 12 18:20:36.058882 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:20:36.058915 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:20:36.058935 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:20:36.058954 systemd[1]: Reached target sockets.target.
Apr 12 18:20:36.058973 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:20:36.058991 systemd[1]: Finished network-cleanup.service.
Apr 12 18:20:36.059009 systemd[1]: Starting systemd-fsck-usr.service...
Apr 12 18:20:36.059053 systemd[1]: Starting systemd-journald.service...
Apr 12 18:20:36.059088 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:20:36.059108 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:20:36.059126 systemd[1]: Starting systemd-vconsole-setup.service...
Apr 12 18:20:36.059144 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:20:36.059178 systemd[1]: Finished systemd-fsck-usr.service.
Apr 12 18:20:36.059199 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 18:20:36.059217 systemd[1]: Finished systemd-vconsole-setup.service.
Apr 12 18:20:36.059237 kernel: audit: type=1130 audit(1712946036.034:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.059263 systemd[1]: Starting dracut-cmdline-ask.service...
Apr 12 18:20:36.059287 systemd-journald[268]: Journal started
Apr 12 18:20:36.059422 systemd-journald[268]: Runtime Journal (/run/log/journal/ec27e25a5475e0cc6544825f024b6706) is 8.0M, max 75.4M, 67.4M free.
Apr 12 18:20:36.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.014649 systemd-modules-load[269]: Inserted module 'overlay'
Apr 12 18:20:36.072596 systemd[1]: Started systemd-journald.service.
Apr 12 18:20:36.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.079161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:20:36.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.098660 kernel: audit: type=1130 audit(1712946036.077:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.098735 kernel: audit: type=1130 audit(1712946036.086:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.111795 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 12 18:20:36.107849 systemd-resolved[270]: Positive Trust Anchors:
Apr 12 18:20:36.107881 systemd-resolved[270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:20:36.107946 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:20:36.128989 systemd-modules-load[269]: Inserted module 'br_netfilter'
Apr 12 18:20:36.129234 kernel: Bridge firewalling registered
Apr 12 18:20:36.138164 systemd[1]: Finished dracut-cmdline-ask.service.
Apr 12 18:20:36.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.150632 systemd[1]: Starting dracut-cmdline.service...
Apr 12 18:20:36.156098 kernel: audit: type=1130 audit(1712946036.140:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.168067 kernel: SCSI subsystem initialized
Apr 12 18:20:36.180770 dracut-cmdline[285]: dracut-dracut-053
Apr 12 18:20:36.195718 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 12 18:20:36.195796 kernel: device-mapper: uevent: version 1.0.3
Apr 12 18:20:36.196489 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c0b96868344262519ffdb2dae3782c942008a0fecdbc0bc85d2e170bd2e8b8a8
Apr 12 18:20:36.210982 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Apr 12 18:20:36.211495 systemd-modules-load[269]: Inserted module 'dm_multipath'
Apr 12 18:20:36.214839 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:20:36.220080 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:20:36.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.241087 kernel: audit: type=1130 audit(1712946036.217:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.247562 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:20:36.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.259080 kernel: audit: type=1130 audit(1712946036.249:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.383068 kernel: Loading iSCSI transport class v2.0-870.
Apr 12 18:20:36.406078 kernel: iscsi: registered transport (tcp)
Apr 12 18:20:36.434171 kernel: iscsi: registered transport (qla4xxx)
Apr 12 18:20:36.434248 kernel: QLogic iSCSI HBA Driver
Apr 12 18:20:36.602063 kernel: random: crng init done
Apr 12 18:20:36.602002 systemd-resolved[270]: Defaulting to hostname 'linux'.
Apr 12 18:20:36.605762 systemd[1]: Started systemd-resolved.service.
Apr 12 18:20:36.609860 systemd[1]: Reached target nss-lookup.target.
Apr 12 18:20:36.627830 kernel: audit: type=1130 audit(1712946036.608:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.638888 systemd[1]: Finished dracut-cmdline.service.
Apr 12 18:20:36.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.647905 systemd[1]: Starting dracut-pre-udev.service...
Apr 12 18:20:36.660971 kernel: audit: type=1130 audit(1712946036.638:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:36.720104 kernel: raid6: neonx8 gen() 6355 MB/s
Apr 12 18:20:36.738092 kernel: raid6: neonx8 xor() 4720 MB/s
Apr 12 18:20:36.756096 kernel: raid6: neonx4 gen() 6481 MB/s
Apr 12 18:20:36.774095 kernel: raid6: neonx4 xor() 4951 MB/s
Apr 12 18:20:36.792089 kernel: raid6: neonx2 gen() 5734 MB/s
Apr 12 18:20:36.810089 kernel: raid6: neonx2 xor() 4538 MB/s
Apr 12 18:20:36.828084 kernel: raid6: neonx1 gen() 4435 MB/s
Apr 12 18:20:36.846088 kernel: raid6: neonx1 xor() 3674 MB/s
Apr 12 18:20:36.864091 kernel: raid6: int64x8 gen() 3404 MB/s
Apr 12 18:20:36.882096 kernel: raid6: int64x8 xor() 2073 MB/s
Apr 12 18:20:36.900085 kernel: raid6: int64x4 gen() 3799 MB/s
Apr 12 18:20:36.918088 kernel: raid6: int64x4 xor() 2183 MB/s
Apr 12 18:20:36.936088 kernel: raid6: int64x2 gen() 3571 MB/s
Apr 12 18:20:36.954091 kernel: raid6: int64x2 xor() 1935 MB/s
Apr 12 18:20:36.972087 kernel: raid6: int64x1 gen() 2729 MB/s
Apr 12 18:20:36.991609 kernel: raid6: int64x1 xor() 1445 MB/s
Apr 12 18:20:36.991684 kernel: raid6: using algorithm neonx4 gen() 6481 MB/s
Apr 12 18:20:36.991708 kernel: raid6: .... xor() 4951 MB/s, rmw enabled
Apr 12 18:20:36.993416 kernel: raid6: using neon recovery algorithm
Apr 12 18:20:37.013096 kernel: xor: measuring software checksum speed
Apr 12 18:20:37.016085 kernel: 8regs : 9413 MB/sec
Apr 12 18:20:37.019084 kernel: 32regs : 11141 MB/sec
Apr 12 18:20:37.022760 kernel: arm64_neon : 9644 MB/sec
Apr 12 18:20:37.022828 kernel: xor: using function: 32regs (11141 MB/sec)
Apr 12 18:20:37.119092 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Apr 12 18:20:37.139892 systemd[1]: Finished dracut-pre-udev.service.
Apr 12 18:20:37.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:37.142000 audit: BPF prog-id=7 op=LOAD
Apr 12 18:20:37.151050 kernel: audit: type=1130 audit(1712946037.140:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:37.149000 audit: BPF prog-id=8 op=LOAD
Apr 12 18:20:37.151707 systemd[1]: Starting systemd-udevd.service...
Apr 12 18:20:37.181842 systemd-udevd[467]: Using default interface naming scheme 'v252'.
Apr 12 18:20:37.194439 systemd[1]: Started systemd-udevd.service.
Apr 12 18:20:37.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:37.217811 systemd[1]: Starting dracut-pre-trigger.service...
Apr 12 18:20:37.254156 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Apr 12 18:20:37.326732 systemd[1]: Finished dracut-pre-trigger.service.
Apr 12 18:20:37.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:20:37.331233 systemd[1]: Starting systemd-udev-trigger.service...
Apr 12 18:20:37.448647 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 18:20:37.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Apr 12 18:20:37.598963 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 12 18:20:37.599082 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Apr 12 18:20:37.607825 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 12 18:20:37.608205 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 12 18:20:37.608234 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 12 18:20:37.617907 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 12 18:20:37.618321 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:19:f6:3b:dd:11 Apr 12 18:20:37.625077 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 12 18:20:37.631248 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 12 18:20:37.631323 kernel: GPT:9289727 != 16777215 Apr 12 18:20:37.631348 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 12 18:20:37.633459 kernel: GPT:9289727 != 16777215 Apr 12 18:20:37.634776 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 12 18:20:37.638225 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 12 18:20:37.646084 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line. Apr 12 18:20:37.755121 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (518) Apr 12 18:20:37.782832 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 18:20:37.847539 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:20:37.871296 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 18:20:37.873813 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 18:20:37.905497 systemd[1]: Starting disk-uuid.service... Apr 12 18:20:37.920463 disk-uuid[620]: Primary Header is updated. Apr 12 18:20:37.920463 disk-uuid[620]: Secondary Entries is updated. 
Apr 12 18:20:37.920463 disk-uuid[620]: Secondary Header is updated. Apr 12 18:20:37.937084 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 12 18:20:37.947087 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 12 18:20:38.275657 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 18:20:38.952676 disk-uuid[621]: The operation has completed successfully. Apr 12 18:20:38.955116 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 12 18:20:39.148854 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 18:20:39.150982 systemd[1]: Finished disk-uuid.service. Apr 12 18:20:39.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.178178 systemd[1]: Starting verity-setup.service... Apr 12 18:20:39.213094 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 12 18:20:39.306138 systemd[1]: Found device dev-mapper-usr.device. Apr 12 18:20:39.313390 systemd[1]: Mounting sysusr-usr.mount... Apr 12 18:20:39.320001 systemd[1]: Finished verity-setup.service. Apr 12 18:20:39.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.409087 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 18:20:39.409466 systemd[1]: Mounted sysusr-usr.mount. Apr 12 18:20:39.412743 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Apr 12 18:20:39.417003 systemd[1]: Starting ignition-setup.service... 
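[Editorial note] The kernel's "GPT:9289727 != 16777215" warning above means the backup GPT header was not at the last LBA — typical after a cloud image is written to a larger volume. A minimal sketch of the arithmetic behind that check, using the numbers from this log (the `sgdisk -e` invocation mentioned in the comment is the usual fix, shown hypothetically and not run here):

```shell
# GPT requires the backup (alternate) header to sit in the disk's last
# LBA, i.e. total_sectors - 1.  Numbers taken from this log:
total_sectors=16777216            # the 8 GiB EBS volume, 512-byte sectors
expected_alt_lba=$((total_sectors - 1))
found_alt_lba=9289727             # where the smaller source image left it
echo "expected=$expected_alt_lba found=$found_alt_lba"
# To relocate the backup header to the end of the disk one would run
# e.g.:  sgdisk -e /dev/nvme0n1   (illustrative, destructive on real disks)
```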
Apr 12 18:20:39.420781 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 18:20:39.451510 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:20:39.451620 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 12 18:20:39.454096 kernel: BTRFS info (device nvme0n1p6): has skinny extents Apr 12 18:20:39.466090 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 12 18:20:39.487288 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 12 18:20:39.511059 systemd[1]: Finished ignition-setup.service. Apr 12 18:20:39.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.514951 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 18:20:39.619397 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 18:20:39.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.621000 audit: BPF prog-id=9 op=LOAD Apr 12 18:20:39.625846 systemd[1]: Starting systemd-networkd.service... Apr 12 18:20:39.679583 systemd-networkd[1049]: lo: Link UP Apr 12 18:20:39.679611 systemd-networkd[1049]: lo: Gained carrier Apr 12 18:20:39.684002 systemd-networkd[1049]: Enumeration completed Apr 12 18:20:39.684603 systemd-networkd[1049]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:20:39.689946 systemd[1]: Started systemd-networkd.service. Apr 12 18:20:39.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.693814 systemd[1]: Reached target network.target. 
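[Editorial note] The log above shows systemd-networkd configuring eth0 from /usr/lib/systemd/network/zz-default.network. A sketch of what such a catch-all DHCP unit looks like (illustrative, not the verbatim Flatcar file):

```ini
# zz-default.network (sketch): match any interface not claimed by an
# earlier .network file and configure it via DHCP.
[Match]
Name=*

[Network]
DHCP=yes
```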
Apr 12 18:20:39.704570 kernel: kauditd_printk_skb: 11 callbacks suppressed Apr 12 18:20:39.704619 kernel: audit: type=1130 audit(1712946039.692:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.706722 systemd-networkd[1049]: eth0: Link UP Apr 12 18:20:39.706748 systemd-networkd[1049]: eth0: Gained carrier Apr 12 18:20:39.712577 systemd[1]: Starting iscsiuio.service... Apr 12 18:20:39.726259 systemd[1]: Started iscsiuio.service. Apr 12 18:20:39.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.731459 systemd[1]: Starting iscsid.service... Apr 12 18:20:39.737173 systemd-networkd[1049]: eth0: DHCPv4 address 172.31.18.247/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 12 18:20:39.747084 kernel: audit: type=1130 audit(1712946039.728:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.747256 iscsid[1054]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:20:39.753205 iscsid[1054]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Apr 12 18:20:39.753205 iscsid[1054]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 18:20:39.753205 iscsid[1054]: If using hardware iscsi like qla4xxx this message can be ignored. 
Apr 12 18:20:39.753205 iscsid[1054]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:20:39.753205 iscsid[1054]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 18:20:39.809351 kernel: audit: type=1130 audit(1712946039.766:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.752024 systemd[1]: Started iscsid.service. Apr 12 18:20:39.769349 systemd[1]: Starting dracut-initqueue.service... Apr 12 18:20:39.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.802764 systemd[1]: Finished dracut-initqueue.service. Apr 12 18:20:39.835199 kernel: audit: type=1130 audit(1712946039.806:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:39.807481 systemd[1]: Reached target remote-fs-pre.target. Apr 12 18:20:39.809422 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:20:39.813131 systemd[1]: Reached target remote-fs.target. Apr 12 18:20:39.819210 systemd[1]: Starting dracut-pre-mount.service... Apr 12 18:20:39.848403 systemd[1]: Finished dracut-pre-mount.service. Apr 12 18:20:39.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:20:39.859134 kernel: audit: type=1130 audit(1712946039.850:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:40.295212 ignition[975]: Ignition 2.14.0 Apr 12 18:20:40.295250 ignition[975]: Stage: fetch-offline Apr 12 18:20:40.295706 ignition[975]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:20:40.295777 ignition[975]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Apr 12 18:20:40.315955 ignition[975]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 12 18:20:40.319447 ignition[975]: Ignition finished successfully Apr 12 18:20:40.323149 systemd[1]: Finished ignition-fetch-offline.service. Apr 12 18:20:40.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:40.326917 systemd[1]: Starting ignition-fetch.service... Apr 12 18:20:40.346012 kernel: audit: type=1130 audit(1712946040.322:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:20:40.347526 ignition[1073]: Ignition 2.14.0 Apr 12 18:20:40.347580 ignition[1073]: Stage: fetch Apr 12 18:20:40.347964 ignition[1073]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:20:40.348092 ignition[1073]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Apr 12 18:20:40.363856 ignition[1073]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 12 18:20:40.366353 ignition[1073]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 12 18:20:40.373877 ignition[1073]: INFO : PUT result: OK Apr 12 18:20:40.377933 ignition[1073]: DEBUG : parsed url from cmdline: "" Apr 12 18:20:40.380021 ignition[1073]: INFO : no config URL provided Apr 12 18:20:40.380021 ignition[1073]: INFO : reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:20:40.384414 ignition[1073]: INFO : no config at "/usr/lib/ignition/user.ign" Apr 12 18:20:40.384414 ignition[1073]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 12 18:20:40.389389 ignition[1073]: INFO : PUT result: OK Apr 12 18:20:40.391086 ignition[1073]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 12 18:20:40.394299 ignition[1073]: INFO : GET result: OK Apr 12 18:20:40.396087 ignition[1073]: DEBUG : parsing config with SHA512: 185f18bcf57661a68f9e6b643a929f23b18b066e0e6f33d3393f4755093215efc7db2386f8b0c00c6c867b70627635b35b1712013c95a8d5e2faa52b360e78b2 Apr 12 18:20:40.470646 unknown[1073]: fetched base config from "system" Apr 12 18:20:40.472710 unknown[1073]: fetched base config from "system" Apr 12 18:20:40.472992 unknown[1073]: fetched user config from "aws" Apr 12 18:20:40.474669 ignition[1073]: fetch: fetch complete Apr 12 18:20:40.474685 ignition[1073]: fetch: fetch passed Apr 12 18:20:40.474807 ignition[1073]: Ignition finished successfully Apr 12 18:20:40.484350 systemd[1]: Finished 
ignition-fetch.service. Apr 12 18:20:40.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:40.489445 systemd[1]: Starting ignition-kargs.service... Apr 12 18:20:40.502244 kernel: audit: type=1130 audit(1712946040.486:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:40.517197 ignition[1079]: Ignition 2.14.0 Apr 12 18:20:40.519072 ignition[1079]: Stage: kargs Apr 12 18:20:40.520825 ignition[1079]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:20:40.523589 ignition[1079]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Apr 12 18:20:40.535810 ignition[1079]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 12 18:20:40.538753 ignition[1079]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 12 18:20:40.542399 ignition[1079]: INFO : PUT result: OK Apr 12 18:20:40.549514 ignition[1079]: kargs: kargs passed Apr 12 18:20:40.549677 ignition[1079]: Ignition finished successfully Apr 12 18:20:40.554443 systemd[1]: Finished ignition-kargs.service. Apr 12 18:20:40.558488 systemd[1]: Starting ignition-disks.service... Apr 12 18:20:40.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:40.576104 kernel: audit: type=1130 audit(1712946040.553:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:20:40.577707 ignition[1085]: Ignition 2.14.0 Apr 12 18:20:40.578368 ignition[1085]: Stage: disks Apr 12 18:20:40.578747 ignition[1085]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:20:40.578812 ignition[1085]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Apr 12 18:20:40.597277 ignition[1085]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 12 18:20:40.599774 ignition[1085]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 12 18:20:40.603313 ignition[1085]: INFO : PUT result: OK Apr 12 18:20:40.609676 ignition[1085]: disks: disks passed Apr 12 18:20:40.609834 ignition[1085]: Ignition finished successfully Apr 12 18:20:40.614341 systemd[1]: Finished ignition-disks.service. Apr 12 18:20:40.645230 kernel: audit: type=1130 audit(1712946040.615:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:40.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:40.616502 systemd[1]: Reached target initrd-root-device.target. Apr 12 18:20:40.625322 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:20:40.627215 systemd[1]: Reached target local-fs.target. Apr 12 18:20:40.629112 systemd[1]: Reached target sysinit.target. Apr 12 18:20:40.629777 systemd[1]: Reached target basic.target. Apr 12 18:20:40.631837 systemd[1]: Starting systemd-fsck-root.service... Apr 12 18:20:40.688067 systemd-fsck[1093]: ROOT: clean, 612/553520 files, 56018/553472 blocks Apr 12 18:20:40.697126 systemd[1]: Finished systemd-fsck-root.service. Apr 12 18:20:40.702340 systemd[1]: Mounting sysroot.mount... 
Apr 12 18:20:40.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:40.714146 kernel: audit: type=1130 audit(1712946040.697:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:40.729070 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 18:20:40.730887 systemd[1]: Mounted sysroot.mount. Apr 12 18:20:40.734734 systemd[1]: Reached target initrd-root-fs.target. Apr 12 18:20:40.752223 systemd[1]: Mounting sysroot-usr.mount... Apr 12 18:20:40.756074 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Apr 12 18:20:40.756218 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 18:20:40.766737 systemd[1]: Reached target ignition-diskful.target. Apr 12 18:20:40.773230 systemd[1]: Mounted sysroot-usr.mount. Apr 12 18:20:40.805478 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:20:40.808306 systemd[1]: Starting initrd-setup-root.service... Apr 12 18:20:40.818737 systemd-networkd[1049]: eth0: Gained IPv6LL Apr 12 18:20:40.835097 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1110) Apr 12 18:20:40.841259 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:20:40.841338 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 12 18:20:40.843483 kernel: BTRFS info (device nvme0n1p6): has skinny extents Apr 12 18:20:40.851113 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 12 18:20:40.855349 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
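[Editorial note] Device unit names in this log such as dev-disk-by\x2dlabel-OEM.device are systemd's escaped form of device paths: literal dashes in the path become "\x2d", path separators become "-", and the leading slash is dropped. A sketch of that rule for the dash-only case, assuming GNU sed (systemd-escape additionally hex-escapes other non-alphanumerics, which this sketch omits):

```shell
# Reproduce systemd's unit-name escaping for /dev/disk/by-label/OEM:
# escape "-" first, then turn "/" into "-".
path="/dev/disk/by-label/OEM"
escaped=$(printf '%s' "${path#/}" | sed -e 's/-/\\x2d/g' -e 's,/,-,g')
unit="${escaped}.device"
echo "$unit"   # dev-disk-by\x2dlabel-OEM.device, as seen in the log
```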
Apr 12 18:20:40.859184 initrd-setup-root[1115]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 18:20:40.870280 initrd-setup-root[1141]: cut: /sysroot/etc/group: No such file or directory Apr 12 18:20:40.880524 initrd-setup-root[1149]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 18:20:40.890325 initrd-setup-root[1157]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 18:20:41.140549 systemd[1]: Finished initrd-setup-root.service. Apr 12 18:20:41.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:41.144406 systemd[1]: Starting ignition-mount.service... Apr 12 18:20:41.148102 systemd[1]: Starting sysroot-boot.service... Apr 12 18:20:41.168183 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Apr 12 18:20:41.169511 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Apr 12 18:20:41.201182 ignition[1176]: INFO : Ignition 2.14.0 Apr 12 18:20:41.201182 ignition[1176]: INFO : Stage: mount Apr 12 18:20:41.201182 ignition[1176]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:20:41.201182 ignition[1176]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Apr 12 18:20:41.206642 systemd[1]: Finished sysroot-boot.service. Apr 12 18:20:41.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:20:41.223736 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 12 18:20:41.226378 ignition[1176]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 12 18:20:41.229758 ignition[1176]: INFO : PUT result: OK Apr 12 18:20:41.236072 ignition[1176]: INFO : mount: mount passed Apr 12 18:20:41.237809 ignition[1176]: INFO : Ignition finished successfully Apr 12 18:20:41.240918 systemd[1]: Finished ignition-mount.service. Apr 12 18:20:41.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:20:41.245379 systemd[1]: Starting ignition-files.service... Apr 12 18:20:41.262341 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:20:41.279094 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1185) Apr 12 18:20:41.285588 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 12 18:20:41.285669 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 12 18:20:41.285693 kernel: BTRFS info (device nvme0n1p6): has skinny extents Apr 12 18:20:41.295078 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 12 18:20:41.300371 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
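[Editorial note] In the files stage that follows, Ignition hashes each config and fetched artifact ("parsing config with SHA512: ...", "file matches expected sum of: ...") before acting on it. The same integrity check can be sketched with coreutils; here the expected digest is derived from a stand-in temp file rather than taken from a real Ignition config:

```shell
# Verify a file against a SHA512 digest the way Ignition's log entries
# describe.  tmpfile stands in for a downloaded artifact.
tmpfile=$(mktemp)
printf 'example payload\n' > "$tmpfile"
# In practice the expected digest comes from the Ignition config; for
# illustration we compute it from the file itself:
expected=$(sha512sum "$tmpfile" | awk '{print $1}')
result=$(printf '%s  %s\n' "$expected" "$tmpfile" | sha512sum -c -)
echo "$result"   # e.g. "/tmp/tmp.XXXX: OK"
rm -f "$tmpfile"
```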
Apr 12 18:20:41.322325 ignition[1204]: INFO : Ignition 2.14.0 Apr 12 18:20:41.324360 ignition[1204]: INFO : Stage: files Apr 12 18:20:41.326167 ignition[1204]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:20:41.328848 ignition[1204]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Apr 12 18:20:41.345443 ignition[1204]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 12 18:20:41.348456 ignition[1204]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 12 18:20:41.351624 ignition[1204]: INFO : PUT result: OK Apr 12 18:20:41.359508 ignition[1204]: DEBUG : files: compiled without relabeling support, skipping Apr 12 18:20:41.364129 ignition[1204]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 12 18:20:41.367219 ignition[1204]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 12 18:20:41.414711 ignition[1204]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 12 18:20:41.417680 ignition[1204]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 12 18:20:41.421796 unknown[1204]: wrote ssh authorized keys file for user: core Apr 12 18:20:41.424302 ignition[1204]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 12 18:20:41.428544 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Apr 12 18:20:41.432671 ignition[1204]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1 Apr 12 18:20:41.777516 ignition[1204]: INFO : GET result: OK Apr 12 18:20:42.429796 ignition[1204]: DEBUG : file matches expected sum of: 
b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a Apr 12 18:20:42.435278 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz" Apr 12 18:20:42.435278 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 12 18:20:42.435278 ignition[1204]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 12 18:20:42.478343 ignition[1204]: INFO : GET result: OK Apr 12 18:20:42.616568 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 12 18:20:42.620992 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Apr 12 18:20:42.620992 ignition[1204]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1 Apr 12 18:20:42.891585 ignition[1204]: INFO : GET result: OK Apr 12 18:20:43.173338 ignition[1204]: DEBUG : file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251 Apr 12 18:20:43.178328 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz" Apr 12 18:20:43.187630 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Apr 12 18:20:43.191360 ignition[1204]: INFO : GET https://dl.k8s.io/release/v1.29.2/bin/linux/arm64/kubectl: attempt #1 Apr 12 18:20:43.404467 ignition[1204]: INFO : GET result: OK Apr 12 18:20:49.374276 ignition[1204]: DEBUG : file matches expected sum of: 
b303598f3a65bbc366a7bfb4632d3b5cdd2d41b8a7973de80a99f8b1bb058299b57dc39b17a53eb7a54f1a0479ae4e2093fec675f1baff4613e14e0ed9d65c21
Apr 12 18:20:49.380970 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:20:49.380970 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:20:49.380970 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:20:49.380970 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Apr 12 18:20:49.380970 ignition[1204]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:20:49.413287 ignition[1204]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem291519330"
Apr 12 18:20:49.413287 ignition[1204]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem291519330": device or resource busy
Apr 12 18:20:49.413287 ignition[1204]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem291519330", trying btrfs: device or resource busy
Apr 12 18:20:49.413287 ignition[1204]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem291519330"
Apr 12 18:20:49.430799 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1209)
Apr 12 18:20:49.430850 ignition[1204]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem291519330"
Apr 12 18:20:49.430850 ignition[1204]: INFO : op(3): [started] unmounting "/mnt/oem291519330"
Apr 12 18:20:49.430850 ignition[1204]: INFO : op(3): [finished] unmounting "/mnt/oem291519330"
Apr 12 18:20:49.430850 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Apr 12 18:20:49.430850 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:20:49.430850 ignition[1204]: INFO : GET https://dl.k8s.io/release/v1.29.2/bin/linux/arm64/kubeadm: attempt #1
Apr 12 18:20:49.627496 ignition[1204]: INFO : GET result: OK
Apr 12 18:20:56.249385 ignition[1204]: DEBUG : file matches expected sum of: 3e6beeb7794aa002604f0be43af0255e707846760508ebe98006ec72ae8d7a7cf2c14fd52bbcc5084f0e9366b992dc64341b1da646f1ce6e937fb762f880dc15
Apr 12 18:20:56.254441 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:20:56.254441 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:20:56.254441 ignition[1204]: INFO : GET https://dl.k8s.io/release/v1.29.2/bin/linux/arm64/kubelet: attempt #1
Apr 12 18:20:56.507638 ignition[1204]: INFO : GET result: OK
Apr 12 18:21:11.562613 ignition[1204]: DEBUG : file matches expected sum of: ded47d757fac0279b1b784756fb54b3a5cb0180ce45833838b00d6d7c87578a985e4627503dd7ff734e5f577cf4752ae7daaa2b68e5934fd4617ea15e995f91b
Apr 12 18:21:11.567852 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:21:11.567852 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:21:11.567852 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:21:11.567852 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:21:11.567852 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:21:11.567852 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:21:11.567852 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:21:11.567852 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:21:11.567852 ignition[1204]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 12 18:21:11.860136 ignition[1204]: INFO : GET result: OK
Apr 12 18:21:12.014406 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:21:12.018263 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh"
Apr 12 18:21:12.022527 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh"
Apr 12 18:21:12.026181 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:21:12.026181 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:21:12.026181 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Apr 12 18:21:12.026181 ignition[1204]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:21:12.053692 ignition[1204]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1883659244"
Apr 12 18:21:12.056713 ignition[1204]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1883659244": device or resource busy
Apr 12 18:21:12.056713 ignition[1204]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1883659244", trying btrfs: device or resource busy
Apr 12 18:21:12.056713 ignition[1204]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1883659244"
Apr 12 18:21:12.066945 ignition[1204]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1883659244"
Apr 12 18:21:12.066945 ignition[1204]: INFO : op(6): [started] unmounting "/mnt/oem1883659244"
Apr 12 18:21:12.066945 ignition[1204]: INFO : op(6): [finished] unmounting "/mnt/oem1883659244"
Apr 12 18:21:12.066945 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Apr 12 18:21:12.066945 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Apr 12 18:21:12.066945 ignition[1204]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:21:12.098862 ignition[1204]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3496388372"
Apr 12 18:21:12.098862 ignition[1204]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3496388372": device or resource busy
Apr 12 18:21:12.098862 ignition[1204]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3496388372", trying btrfs: device or resource busy
Apr 12 18:21:12.098862 ignition[1204]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3496388372"
Apr 12 18:21:12.098862 ignition[1204]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3496388372"
Apr 12 18:21:12.098862 ignition[1204]: INFO : op(9): [started] unmounting "/mnt/oem3496388372"
Apr 12 18:21:12.098862 ignition[1204]: INFO : op(9): [finished] unmounting "/mnt/oem3496388372"
Apr 12 18:21:12.098862 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Apr 12 18:21:12.098862 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Apr 12 18:21:12.098862 ignition[1204]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:21:12.144121 ignition[1204]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1471608365"
Apr 12 18:21:12.144121 ignition[1204]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1471608365": device or resource busy
Apr 12 18:21:12.144121 ignition[1204]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1471608365", trying btrfs: device or resource busy
Apr 12 18:21:12.144121 ignition[1204]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1471608365"
Apr 12 18:21:12.144121 ignition[1204]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1471608365"
Apr 12 18:21:12.160849 ignition[1204]: INFO : op(c): [started] unmounting "/mnt/oem1471608365"
Apr 12 18:21:12.160849 ignition[1204]: INFO : op(c): [finished] unmounting "/mnt/oem1471608365"
Apr 12 18:21:12.160849 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Apr 12 18:21:12.160849 ignition[1204]: INFO : files: op(14): [started] processing unit "amazon-ssm-agent.service"
Apr 12 18:21:12.160849 ignition[1204]: INFO : files: op(14): op(15): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Apr 12 18:21:12.160849 ignition[1204]: INFO : files: op(14): op(15): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Apr 12 18:21:12.160849 ignition[1204]: INFO : files: op(14): [finished] processing unit "amazon-ssm-agent.service"
Apr 12 18:21:12.160849 ignition[1204]: INFO : files: op(16): [started] processing unit "nvidia.service"
Apr 12 18:21:12.160849 ignition[1204]: INFO : files: op(16): [finished] processing unit "nvidia.service"
Apr 12 18:21:12.160849 ignition[1204]: INFO : files: op(17): [started] processing unit "coreos-metadata-sshkeys@.service"
Apr 12 18:21:12.160849 ignition[1204]: INFO : files: op(17): [finished] processing unit "coreos-metadata-sshkeys@.service"
Apr 12 18:21:12.160849 ignition[1204]: INFO : files: op(18): [started] processing unit "prepare-cni-plugins.service"
Apr 12 18:21:12.160849 ignition[1204]: INFO : files: op(18): op(19): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(18): op(19): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(18): [finished] processing unit "prepare-cni-plugins.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1a): [started] processing unit "prepare-critools.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1a): [finished] processing unit "prepare-critools.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1c): [started] processing unit "prepare-helm.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1c): [finished] processing unit "prepare-helm.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1e): [started] setting preset to enabled for "amazon-ssm-agent.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1e): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1f): [started] setting preset to enabled for "nvidia.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(1f): [finished] setting preset to enabled for "nvidia.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(20): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(20): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(21): [started] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(22): [started] setting preset to enabled for "prepare-critools.service"
Apr 12 18:21:12.206432 ignition[1204]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-critools.service"
Apr 12 18:21:12.299309 kernel: kauditd_printk_skb: 3 callbacks suppressed
Apr 12 18:21:12.299353 kernel: audit: type=1130 audit(1712946072.234:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.232650 systemd[1]: Finished ignition-files.service.
Apr 12 18:21:12.302553 ignition[1204]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service"
Apr 12 18:21:12.302553 ignition[1204]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service"
Apr 12 18:21:12.302553 ignition[1204]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:21:12.302553 ignition[1204]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:21:12.302553 ignition[1204]: INFO : files: files passed
Apr 12 18:21:12.302553 ignition[1204]: INFO : Ignition finished successfully
Apr 12 18:21:12.246158 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Apr 12 18:21:12.270904 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Apr 12 18:21:12.274384 systemd[1]: Starting ignition-quench.service...
Apr 12 18:21:12.325506 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 12 18:21:12.329348 systemd[1]: Finished ignition-quench.service.
Apr 12 18:21:12.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.347063 kernel: audit: type=1130 audit(1712946072.331:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.347149 kernel: audit: type=1131 audit(1712946072.337:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.349907 initrd-setup-root-after-ignition[1229]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 12 18:21:12.354642 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Apr 12 18:21:12.357502 systemd[1]: Reached target ignition-complete.target.
Apr 12 18:21:12.374467 kernel: audit: type=1130 audit(1712946072.355:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.366570 systemd[1]: Starting initrd-parse-etc.service...
Apr 12 18:21:12.402621 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 12 18:21:12.404775 systemd[1]: Finished initrd-parse-etc.service.
Apr 12 18:21:12.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.418100 kernel: audit: type=1130 audit(1712946072.408:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.418208 systemd[1]: Reached target initrd-fs.target.
Apr 12 18:21:12.428964 kernel: audit: type=1131 audit(1712946072.416:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.428944 systemd[1]: Reached target initrd.target.
Apr 12 18:21:12.432207 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Apr 12 18:21:12.436740 systemd[1]: Starting dracut-pre-pivot.service...
Apr 12 18:21:12.464722 systemd[1]: Finished dracut-pre-pivot.service.
Apr 12 18:21:12.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.472359 systemd[1]: Starting initrd-cleanup.service...
Apr 12 18:21:12.481383 kernel: audit: type=1130 audit(1712946072.469:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.496511 systemd[1]: Stopped target nss-lookup.target.
Apr 12 18:21:12.500498 systemd[1]: Stopped target remote-cryptsetup.target.
Apr 12 18:21:12.504665 systemd[1]: Stopped target timers.target.
Apr 12 18:21:12.508200 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 12 18:21:12.508523 systemd[1]: Stopped dracut-pre-pivot.service.
Apr 12 18:21:12.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.514372 systemd[1]: Stopped target initrd.target.
Apr 12 18:21:12.530933 kernel: audit: type=1131 audit(1712946072.513:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.523373 systemd[1]: Stopped target basic.target.
Apr 12 18:21:12.525391 systemd[1]: Stopped target ignition-complete.target.
Apr 12 18:21:12.527603 systemd[1]: Stopped target ignition-diskful.target.
Apr 12 18:21:12.530910 systemd[1]: Stopped target initrd-root-device.target.
Apr 12 18:21:12.538219 systemd[1]: Stopped target remote-fs.target.
Apr 12 18:21:12.543604 systemd[1]: Stopped target remote-fs-pre.target.
Apr 12 18:21:12.547548 systemd[1]: Stopped target sysinit.target.
Apr 12 18:21:12.555076 systemd[1]: Stopped target local-fs.target.
Apr 12 18:21:12.558575 systemd[1]: Stopped target local-fs-pre.target.
Apr 12 18:21:12.562128 systemd[1]: Stopped target swap.target.
Apr 12 18:21:12.565275 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 12 18:21:12.565577 systemd[1]: Stopped dracut-pre-mount.service.
Apr 12 18:21:12.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.571450 systemd[1]: Stopped target cryptsetup.target.
Apr 12 18:21:12.581732 kernel: audit: type=1131 audit(1712946072.570:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.581919 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 12 18:21:12.582856 systemd[1]: Stopped dracut-initqueue.service.
Apr 12 18:21:12.607755 kernel: audit: type=1131 audit(1712946072.587:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.588847 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 12 18:21:12.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.589223 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Apr 12 18:21:12.599105 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 12 18:21:12.599415 systemd[1]: Stopped ignition-files.service.
Apr 12 18:21:12.613966 systemd[1]: Stopping ignition-mount.service...
Apr 12 18:21:12.623890 systemd[1]: Stopping iscsiuio.service...
Apr 12 18:21:12.627221 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 12 18:21:12.628805 systemd[1]: Stopped kmod-static-nodes.service.
Apr 12 18:21:12.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.641335 systemd[1]: Stopping sysroot-boot.service...
Apr 12 18:21:12.655829 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 12 18:21:12.658772 systemd[1]: Stopped systemd-udev-trigger.service.
Apr 12 18:21:12.663412 ignition[1242]: INFO : Ignition 2.14.0
Apr 12 18:21:12.663412 ignition[1242]: INFO : Stage: umount
Apr 12 18:21:12.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.673496 ignition[1242]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Apr 12 18:21:12.673496 ignition[1242]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Apr 12 18:21:12.666087 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 12 18:21:12.666451 systemd[1]: Stopped dracut-pre-trigger.service.
Apr 12 18:21:12.681510 systemd[1]: iscsiuio.service: Deactivated successfully.
Apr 12 18:21:12.684547 systemd[1]: Stopped iscsiuio.service.
Apr 12 18:21:12.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.701116 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 12 18:21:12.703911 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 12 18:21:12.706206 systemd[1]: Finished initrd-cleanup.service.
Apr 12 18:21:12.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.711402 ignition[1242]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 12 18:21:12.714048 ignition[1242]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 12 18:21:12.717303 ignition[1242]: INFO : PUT result: OK
Apr 12 18:21:12.725530 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 12 18:21:12.725790 systemd[1]: Stopped sysroot-boot.service.
Apr 12 18:21:12.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.733677 ignition[1242]: INFO : umount: umount passed
Apr 12 18:21:12.735719 ignition[1242]: INFO : Ignition finished successfully
Apr 12 18:21:12.739350 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 12 18:21:12.741415 systemd[1]: Stopped ignition-mount.service.
Apr 12 18:21:12.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.744898 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 12 18:21:12.745060 systemd[1]: Stopped ignition-disks.service.
Apr 12 18:21:12.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.750255 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 12 18:21:12.750376 systemd[1]: Stopped ignition-kargs.service.
Apr 12 18:21:12.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.755650 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 12 18:21:12.755804 systemd[1]: Stopped ignition-fetch.service.
Apr 12 18:21:12.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.761066 systemd[1]: Stopped target network.target.
Apr 12 18:21:12.764291 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 12 18:21:12.764447 systemd[1]: Stopped ignition-fetch-offline.service.
Apr 12 18:21:12.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.770362 systemd[1]: Stopped target paths.target.
Apr 12 18:21:12.773459 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 12 18:21:12.781164 systemd[1]: Stopped systemd-ask-password-console.path.
Apr 12 18:21:12.784904 systemd[1]: Stopped target slices.target.
Apr 12 18:21:12.787954 systemd[1]: Stopped target sockets.target.
Apr 12 18:21:12.791183 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 12 18:21:12.791270 systemd[1]: Closed iscsid.socket.
Apr 12 18:21:12.795882 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 12 18:21:12.796019 systemd[1]: Closed iscsiuio.socket.
Apr 12 18:21:12.799270 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 12 18:21:12.802819 systemd[1]: Stopped ignition-setup.service.
Apr 12 18:21:12.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.806257 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 12 18:21:12.806392 systemd[1]: Stopped initrd-setup-root.service.
Apr 12 18:21:12.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.812255 systemd[1]: Stopping systemd-networkd.service...
Apr 12 18:21:12.815781 systemd[1]: Stopping systemd-resolved.service...
Apr 12 18:21:12.819133 systemd-networkd[1049]: eth0: DHCPv6 lease lost
Apr 12 18:21:12.822808 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 12 18:21:12.825454 systemd[1]: Stopped systemd-networkd.service.
Apr 12 18:21:12.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.829489 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 12 18:21:12.833000 audit: BPF prog-id=9 op=UNLOAD
Apr 12 18:21:12.829631 systemd[1]: Closed systemd-networkd.socket.
Apr 12 18:21:12.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.835596 systemd[1]: Stopping network-cleanup.service...
Apr 12 18:21:12.844224 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 12 18:21:12.844378 systemd[1]: Stopped parse-ip-for-networkd.service.
Apr 12 18:21:12.846552 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 12 18:21:12.846676 systemd[1]: Stopped systemd-sysctl.service.
Apr 12 18:21:12.851852 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 12 18:21:12.851966 systemd[1]: Stopped systemd-modules-load.service.
Apr 12 18:21:12.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.865529 systemd[1]: Stopping systemd-udevd.service...
Apr 12 18:21:12.872485 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 12 18:21:12.874599 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 12 18:21:12.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.874829 systemd[1]: Stopped systemd-resolved.service.
Apr 12 18:21:12.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.879000 audit: BPF prog-id=6 op=UNLOAD
Apr 12 18:21:12.878191 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 12 18:21:12.878530 systemd[1]: Stopped systemd-udevd.service.
Apr 12 18:21:12.883318 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 12 18:21:12.883456 systemd[1]: Closed systemd-udevd-control.socket.
Apr 12 18:21:12.889284 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 12 18:21:12.889377 systemd[1]: Closed systemd-udevd-kernel.socket.
Apr 12 18:21:12.902371 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 12 18:21:12.904730 systemd[1]: Stopped dracut-pre-udev.service.
Apr 12 18:21:12.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.908211 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 12 18:21:12.908337 systemd[1]: Stopped dracut-cmdline.service.
Apr 12 18:21:12.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.913390 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 12 18:21:12.913507 systemd[1]: Stopped dracut-cmdline-ask.service.
Apr 12 18:21:12.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.920778 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Apr 12 18:21:12.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.928903 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 12 18:21:12.929100 systemd[1]: Stopped systemd-vconsole-setup.service.
Apr 12 18:21:12.936721 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 12 18:21:12.938609 systemd[1]: Stopped network-cleanup.service.
Apr 12 18:21:12.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.949672 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 12 18:21:12.952158 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Apr 12 18:21:12.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:12.956268 systemd[1]: Reached target initrd-switch-root.target.
Apr 12 18:21:12.961369 systemd[1]: Starting initrd-switch-root.service...
Apr 12 18:21:12.979464 systemd[1]: Switching root.
Apr 12 18:21:13.006018 iscsid[1054]: iscsid shutting down.
Apr 12 18:21:13.008262 systemd-journald[268]: Received SIGTERM from PID 1 (systemd).
Apr 12 18:21:13.008359 systemd-journald[268]: Journal stopped
Apr 12 18:21:17.823432 kernel: SELinux: Class mctp_socket not defined in policy.
Apr 12 18:21:17.823928 kernel: SELinux: Class anon_inode not defined in policy.
Apr 12 18:21:17.823989 kernel: SELinux: the above unknown classes and permissions will be allowed
Apr 12 18:21:17.824775 kernel: SELinux: policy capability network_peer_controls=1
Apr 12 18:21:17.824848 kernel: SELinux: policy capability open_perms=1
Apr 12 18:21:17.824882 kernel: SELinux: policy capability extended_socket_class=1
Apr 12 18:21:17.824934 kernel: SELinux: policy capability always_check_network=0
Apr 12 18:21:17.825000 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 12 18:21:17.825254 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 12 18:21:17.825303 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 12 18:21:17.825338 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 12 18:21:17.825338 systemd[1]: Successfully loaded SELinux policy in 73.195ms.
Apr 12 18:21:17.825407 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.887ms.
Apr 12 18:21:17.825443 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:21:17.825478 systemd[1]: Detected virtualization amazon.
Apr 12 18:21:17.825511 systemd[1]: Detected architecture arm64.
Apr 12 18:21:17.825546 systemd[1]: Detected first boot.
Apr 12 18:21:17.825578 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:21:17.825610 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Apr 12 18:21:17.825642 systemd[1]: Populated /etc with preset unit settings.
Apr 12 18:21:17.825682 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:21:17.825719 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:21:17.825754 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:21:17.825796 kernel: kauditd_printk_skb: 47 callbacks suppressed
Apr 12 18:21:17.825825 kernel: audit: type=1334 audit(1712946077.334:85): prog-id=12 op=LOAD
Apr 12 18:21:17.825859 kernel: audit: type=1334 audit(1712946077.334:86): prog-id=3 op=UNLOAD
Apr 12 18:21:17.825893 kernel: audit: type=1334 audit(1712946077.335:87): prog-id=13 op=LOAD
Apr 12 18:21:17.825929 kernel: audit: type=1334 audit(1712946077.337:88): prog-id=14 op=LOAD
Apr 12 18:21:17.825961 kernel: audit: type=1334 audit(1712946077.337:89): prog-id=4 op=UNLOAD
Apr 12 18:21:17.825993 kernel: audit: type=1334 audit(1712946077.337:90): prog-id=5 op=UNLOAD
Apr 12 18:21:17.826022 kernel: audit: type=1334 audit(1712946077.339:91): prog-id=15 op=LOAD
Apr 12 18:21:17.826120 kernel: audit: type=1334 audit(1712946077.339:92): prog-id=12 op=UNLOAD
Apr 12 18:21:17.826160 systemd[1]: iscsid.service: Deactivated successfully.
Apr 12 18:21:17.826195 kernel: audit: type=1334 audit(1712946077.342:93): prog-id=16 op=LOAD
Apr 12 18:21:17.826227 kernel: audit: type=1334 audit(1712946077.344:94): prog-id=17 op=LOAD
Apr 12 18:21:17.826259 systemd[1]: Stopped iscsid.service.
Apr 12 18:21:17.829645 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 12 18:21:17.829726 systemd[1]: Stopped initrd-switch-root.service.
Apr 12 18:21:17.829760 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 12 18:21:17.829795 systemd[1]: Created slice system-addon\x2dconfig.slice.
Apr 12 18:21:17.829828 systemd[1]: Created slice system-addon\x2drun.slice.
Apr 12 18:21:17.829866 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Apr 12 18:21:17.829898 systemd[1]: Created slice system-getty.slice.
Apr 12 18:21:17.829933 systemd[1]: Created slice system-modprobe.slice.
Apr 12 18:21:17.829974 systemd[1]: Created slice system-serial\x2dgetty.slice.
Apr 12 18:21:17.830009 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Apr 12 18:21:17.830104 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Apr 12 18:21:17.830142 systemd[1]: Created slice user.slice.
Apr 12 18:21:17.830176 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:21:17.830207 systemd[1]: Started systemd-ask-password-wall.path.
Apr 12 18:21:17.830240 systemd[1]: Set up automount boot.automount.
Apr 12 18:21:17.830272 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Apr 12 18:21:17.830302 systemd[1]: Stopped target initrd-switch-root.target.
Apr 12 18:21:17.830339 systemd[1]: Stopped target initrd-fs.target.
Apr 12 18:21:17.830371 systemd[1]: Stopped target initrd-root-fs.target.
Apr 12 18:21:17.830401 systemd[1]: Reached target integritysetup.target.
Apr 12 18:21:17.830432 systemd[1]: Reached target remote-cryptsetup.target.
Apr 12 18:21:17.830464 systemd[1]: Reached target remote-fs.target.
Apr 12 18:21:17.830498 systemd[1]: Reached target slices.target.
Apr 12 18:21:17.830528 systemd[1]: Reached target swap.target.
Apr 12 18:21:17.830559 systemd[1]: Reached target torcx.target.
Apr 12 18:21:17.830603 systemd[1]: Reached target veritysetup.target.
Apr 12 18:21:17.830641 systemd[1]: Listening on systemd-coredump.socket.
Apr 12 18:21:17.830674 systemd[1]: Listening on systemd-initctl.socket.
Apr 12 18:21:17.830705 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:21:17.830743 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:21:17.830775 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:21:17.830806 systemd[1]: Listening on systemd-userdbd.socket.
Apr 12 18:21:17.830837 systemd[1]: Mounting dev-hugepages.mount...
Apr 12 18:21:17.830871 systemd[1]: Mounting dev-mqueue.mount...
Apr 12 18:21:17.830901 systemd[1]: Mounting media.mount...
Apr 12 18:21:17.830934 systemd[1]: Mounting sys-kernel-debug.mount...
Apr 12 18:21:17.830969 systemd[1]: Mounting sys-kernel-tracing.mount...
Apr 12 18:21:17.831002 systemd[1]: Mounting tmp.mount...
Apr 12 18:21:17.831066 systemd[1]: Starting flatcar-tmpfiles.service...
Apr 12 18:21:17.831102 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Apr 12 18:21:17.831133 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:21:17.831164 systemd[1]: Starting modprobe@configfs.service...
Apr 12 18:21:17.831195 systemd[1]: Starting modprobe@dm_mod.service...
Apr 12 18:21:17.831229 systemd[1]: Starting modprobe@drm.service...
Apr 12 18:21:17.831260 systemd[1]: Starting modprobe@efi_pstore.service...
Apr 12 18:21:17.831297 systemd[1]: Starting modprobe@fuse.service...
Apr 12 18:21:17.831329 systemd[1]: Starting modprobe@loop.service...
Apr 12 18:21:17.831361 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 12 18:21:17.831415 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 12 18:21:17.831453 systemd[1]: Stopped systemd-fsck-root.service.
Apr 12 18:21:17.831496 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 12 18:21:17.831528 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 12 18:21:17.831559 systemd[1]: Stopped systemd-journald.service.
Apr 12 18:21:17.831599 systemd[1]: Starting systemd-journald.service...
Apr 12 18:21:17.831630 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:21:17.831662 kernel: loop: module loaded
Apr 12 18:21:17.831693 systemd[1]: Starting systemd-network-generator.service...
Apr 12 18:21:17.831724 systemd[1]: Starting systemd-remount-fs.service...
Apr 12 18:21:17.831758 systemd[1]: Starting systemd-udev-trigger.service...
Apr 12 18:21:17.831791 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 12 18:21:17.831822 systemd[1]: Stopped verity-setup.service.
Apr 12 18:21:17.831853 systemd[1]: Mounted dev-hugepages.mount.
Apr 12 18:21:17.831887 systemd[1]: Mounted dev-mqueue.mount.
Apr 12 18:21:17.831928 systemd[1]: Mounted media.mount.
Apr 12 18:21:17.831959 systemd[1]: Mounted sys-kernel-debug.mount.
Apr 12 18:21:17.831992 systemd[1]: Mounted sys-kernel-tracing.mount.
Apr 12 18:21:17.832023 systemd[1]: Mounted tmp.mount.
Apr 12 18:21:17.832093 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:21:17.832139 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 12 18:21:17.832175 systemd[1]: Finished modprobe@configfs.service.
Apr 12 18:21:17.832208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 12 18:21:17.832252 systemd[1]: Finished modprobe@dm_mod.service.
Apr 12 18:21:17.832290 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 12 18:21:17.832322 systemd[1]: Finished modprobe@drm.service.
Apr 12 18:21:17.832353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 12 18:21:17.832386 systemd[1]: Finished modprobe@efi_pstore.service.
Apr 12 18:21:17.832419 kernel: fuse: init (API version 7.34)
Apr 12 18:21:17.832449 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 12 18:21:17.832482 systemd[1]: Finished modprobe@loop.service.
Apr 12 18:21:17.832518 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:21:17.832551 systemd[1]: Finished systemd-network-generator.service.
Apr 12 18:21:17.832586 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 12 18:21:17.832765 systemd-journald[1361]: Journal started
Apr 12 18:21:17.832938 systemd-journald[1361]: Runtime Journal (/run/log/journal/ec27e25a5475e0cc6544825f024b6706) is 8.0M, max 75.4M, 67.4M free.
Apr 12 18:21:17.833230 systemd[1]: Finished modprobe@fuse.service.
Apr 12 18:21:13.218000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 12 18:21:13.322000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Apr 12 18:21:13.322000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Apr 12 18:21:13.322000 audit: BPF prog-id=10 op=LOAD
Apr 12 18:21:13.322000 audit: BPF prog-id=10 op=UNLOAD
Apr 12 18:21:13.322000 audit: BPF prog-id=11 op=LOAD
Apr 12 18:21:13.322000 audit: BPF prog-id=11 op=UNLOAD
Apr 12 18:21:13.468000 audit[1275]: AVC avc: denied { associate } for pid=1275 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Apr 12 18:21:13.468000 audit[1275]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014d8ac a1=40000d0de0 a2=40000d70c0 a3=32 items=0 ppid=1258 pid=1275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:21:13.468000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:21:13.472000 audit[1275]: AVC avc: denied { associate } for pid=1275 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Apr 12 18:21:13.472000 audit[1275]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400014d989 a2=1ed a3=0 items=2 ppid=1258 pid=1275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:21:13.472000 audit: CWD cwd="/"
Apr 12 18:21:13.472000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:21:13.472000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:21:13.472000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:21:17.334000 audit: BPF prog-id=12 op=LOAD
Apr 12 18:21:17.334000 audit: BPF prog-id=3 op=UNLOAD
Apr 12 18:21:17.335000 audit: BPF prog-id=13 op=LOAD
Apr 12 18:21:17.337000 audit: BPF prog-id=14 op=LOAD
Apr 12 18:21:17.337000 audit: BPF prog-id=4 op=UNLOAD
Apr 12 18:21:17.337000 audit: BPF prog-id=5 op=UNLOAD
Apr 12 18:21:17.339000 audit: BPF prog-id=15 op=LOAD
Apr 12 18:21:17.339000 audit: BPF prog-id=12 op=UNLOAD
Apr 12 18:21:17.342000 audit: BPF prog-id=16 op=LOAD
Apr 12 18:21:17.344000 audit: BPF prog-id=17 op=LOAD
Apr 12 18:21:17.344000 audit: BPF prog-id=13 op=UNLOAD
Apr 12 18:21:17.840943 systemd[1]: Started systemd-journald.service.
Apr 12 18:21:17.344000 audit: BPF prog-id=14 op=UNLOAD
Apr 12 18:21:17.349000 audit: BPF prog-id=18 op=LOAD
Apr 12 18:21:17.349000 audit: BPF prog-id=15 op=UNLOAD
Apr 12 18:21:17.352000 audit: BPF prog-id=19 op=LOAD
Apr 12 18:21:17.354000 audit: BPF prog-id=20 op=LOAD
Apr 12 18:21:17.354000 audit: BPF prog-id=16 op=UNLOAD
Apr 12 18:21:17.354000 audit: BPF prog-id=17 op=UNLOAD
Apr 12 18:21:17.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.368000 audit: BPF prog-id=18 op=UNLOAD
Apr 12 18:21:17.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.675000 audit: BPF prog-id=21 op=LOAD
Apr 12 18:21:17.676000 audit: BPF prog-id=22 op=LOAD
Apr 12 18:21:17.676000 audit: BPF prog-id=23 op=LOAD
Apr 12 18:21:17.676000 audit: BPF prog-id=19 op=UNLOAD
Apr 12 18:21:17.676000 audit: BPF prog-id=20 op=UNLOAD
Apr 12 18:21:17.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.812000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 12 18:21:17.812000 audit[1361]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffed450d80 a2=4000 a3=1 items=0 ppid=1 pid=1361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:21:17.812000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 12 18:21:17.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.332694 systemd[1]: Queued start job for default target multi-user.target.
Apr 12 18:21:13.463142 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:21:17.356822 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 12 18:21:13.464222 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:21:17.842293 systemd[1]: Finished systemd-remount-fs.service.
Apr 12 18:21:13.464275 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:21:17.845092 systemd[1]: Reached target network-pre.target.
Apr 12 18:21:13.464346 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Apr 12 18:21:17.852258 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Apr 12 18:21:13.464372 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=debug msg="skipped missing lower profile" missing profile=oem
Apr 12 18:21:17.857217 systemd[1]: Mounting sys-kernel-config.mount...
Apr 12 18:21:13.464444 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Apr 12 18:21:17.859286 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 12 18:21:13.464477 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Apr 12 18:21:17.863582 systemd[1]: Starting systemd-hwdb-update.service...
Apr 12 18:21:13.464943 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Apr 12 18:21:17.868179 systemd[1]: Starting systemd-journal-flush.service...
Apr 12 18:21:13.465072 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:21:17.870183 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 12 18:21:13.465113 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:21:17.875957 systemd[1]: Starting systemd-random-seed.service...
Apr 12 18:21:17.933761 systemd-journald[1361]: Time spent on flushing to /var/log/journal/ec27e25a5475e0cc6544825f024b6706 is 82.997ms for 1162 entries.
Apr 12 18:21:17.933761 systemd-journald[1361]: System Journal (/var/log/journal/ec27e25a5475e0cc6544825f024b6706) is 8.0M, max 195.6M, 187.6M free.
Apr 12 18:21:18.090612 systemd-journald[1361]: Received client request to flush runtime journal.
Apr 12 18:21:17.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:17.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:18.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:13.467545 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Apr 12 18:21:17.878438 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Apr 12 18:21:13.467976 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Apr 12 18:21:17.882495 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:21:13.468086 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3
Apr 12 18:21:17.893572 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Apr 12 18:21:13.468135 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Apr 12 18:21:17.896261 systemd[1]: Mounted sys-kernel-config.mount.
Apr 12 18:21:13.468192 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3
Apr 12 18:21:17.970642 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:21:13.468234 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:13Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Apr 12 18:21:17.986279 systemd[1]: Finished systemd-random-seed.service.
Apr 12 18:21:16.418110 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:16Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:21:17.989317 systemd[1]: Finished flatcar-tmpfiles.service.
Apr 12 18:21:16.418771 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:16Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:21:17.994145 systemd[1]: Reached target first-boot-complete.target.
Apr 12 18:21:16.419155 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:16Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:21:17.999480 systemd[1]: Starting systemd-sysusers.service...
Apr 12 18:21:16.419738 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:16Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:21:18.078165 systemd[1]: Finished systemd-sysusers.service.
Apr 12 18:21:18.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:16.419899 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:16Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Apr 12 18:21:18.099364 systemd[1]: Finished systemd-journal-flush.service.
Apr 12 18:21:16.420145 /usr/lib/systemd/system-generators/torcx-generator[1275]: time="2024-04-12T18:21:16Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Apr 12 18:21:18.113923 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 18:21:18.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:18.118891 systemd[1]: Starting systemd-udev-settle.service...
Apr 12 18:21:18.138761 udevadm[1396]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 12 18:21:18.896809 systemd[1]: Finished systemd-hwdb-update.service.
Apr 12 18:21:18.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:18.899000 audit: BPF prog-id=24 op=LOAD
Apr 12 18:21:18.899000 audit: BPF prog-id=25 op=LOAD
Apr 12 18:21:18.899000 audit: BPF prog-id=7 op=UNLOAD
Apr 12 18:21:18.899000 audit: BPF prog-id=8 op=UNLOAD
Apr 12 18:21:18.901905 systemd[1]: Starting systemd-udevd.service...
Apr 12 18:21:18.943638 systemd-udevd[1397]: Using default interface naming scheme 'v252'.
Apr 12 18:21:19.005603 systemd[1]: Started systemd-udevd.service.
Apr 12 18:21:19.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:19.008000 audit: BPF prog-id=26 op=LOAD
Apr 12 18:21:19.011547 systemd[1]: Starting systemd-networkd.service...
Apr 12 18:21:19.021000 audit: BPF prog-id=27 op=LOAD
Apr 12 18:21:19.022000 audit: BPF prog-id=28 op=LOAD
Apr 12 18:21:19.022000 audit: BPF prog-id=29 op=LOAD
Apr 12 18:21:19.025815 systemd[1]: Starting systemd-userdbd.service...
Apr 12 18:21:19.114807 systemd[1]: Started systemd-userdbd.service.
Apr 12 18:21:19.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:19.125667 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Apr 12 18:21:19.137647 (udev-worker)[1399]: Network interface NamePolicy= disabled on kernel command line.
Apr 12 18:21:19.283623 systemd-networkd[1403]: lo: Link UP
Apr 12 18:21:19.284219 systemd-networkd[1403]: lo: Gained carrier
Apr 12 18:21:19.285438 systemd-networkd[1403]: Enumeration completed
Apr 12 18:21:19.285827 systemd[1]: Started systemd-networkd.service.
Apr 12 18:21:19.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:19.290264 systemd[1]: Starting systemd-networkd-wait-online.service...
Apr 12 18:21:19.294389 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 12 18:21:19.303705 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Apr 12 18:21:19.302863 systemd-networkd[1403]: eth0: Link UP
Apr 12 18:21:19.303277 systemd-networkd[1403]: eth0: Gained carrier
Apr 12 18:21:19.316354 systemd-networkd[1403]: eth0: DHCPv4 address 172.31.18.247/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 12 18:21:19.435170 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1415)
Apr 12 18:21:19.556245 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Apr 12 18:21:19.559104 systemd[1]: Finished systemd-udev-settle.service.
Apr 12 18:21:19.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:19.564141 systemd[1]: Starting lvm2-activation-early.service...
Apr 12 18:21:19.588802 lvm[1510]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 18:21:19.629915 systemd[1]: Finished lvm2-activation-early.service.
Apr 12 18:21:19.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:19.632457 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:21:19.637146 systemd[1]: Starting lvm2-activation.service...
Apr 12 18:21:19.646279 lvm[1511]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 12 18:21:19.685910 systemd[1]: Finished lvm2-activation.service.
Apr 12 18:21:19.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:19.688319 systemd[1]: Reached target local-fs-pre.target.
Apr 12 18:21:19.690428 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 12 18:21:19.690487 systemd[1]: Reached target local-fs.target.
Apr 12 18:21:19.692537 systemd[1]: Reached target machines.target.
Apr 12 18:21:19.697415 systemd[1]: Starting ldconfig.service...
Apr 12 18:21:19.700877 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Apr 12 18:21:19.701283 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:21:19.704089 systemd[1]: Starting systemd-boot-update.service...
Apr 12 18:21:19.708650 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Apr 12 18:21:19.718423 systemd[1]: Starting systemd-machine-id-commit.service...
Apr 12 18:21:19.720646 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Apr 12 18:21:19.720827 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Apr 12 18:21:19.723771 systemd[1]: Starting systemd-tmpfiles-setup.service...
Apr 12 18:21:19.748384 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1513 (bootctl)
Apr 12 18:21:19.751019 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Apr 12 18:21:19.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:19.782236 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Apr 12 18:21:19.787639 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Apr 12 18:21:19.800433 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 12 18:21:19.821767 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 12 18:21:19.858410 systemd-fsck[1522]: fsck.fat 4.2 (2021-01-31)
Apr 12 18:21:19.858410 systemd-fsck[1522]: /dev/nvme0n1p1: 236 files, 117047/258078 clusters
Apr 12 18:21:19.866344 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Apr 12 18:21:19.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:19.871598 systemd[1]: Mounting boot.mount...
Apr 12 18:21:19.911446 systemd[1]: Mounted boot.mount.
Apr 12 18:21:19.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:19.952218 systemd[1]: Finished systemd-boot-update.service.
Apr 12 18:21:20.286815 systemd[1]: Finished systemd-tmpfiles-setup.service.
Apr 12 18:21:20.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:20.291788 systemd[1]: Starting audit-rules.service...
Apr 12 18:21:20.297572 systemd[1]: Starting clean-ca-certificates.service...
Apr 12 18:21:20.303271 systemd[1]: Starting systemd-journal-catalog-update.service...
Apr 12 18:21:20.306000 audit: BPF prog-id=30 op=LOAD
Apr 12 18:21:20.312492 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:21:20.315000 audit: BPF prog-id=31 op=LOAD
Apr 12 18:21:20.318289 systemd[1]: Starting systemd-timesyncd.service...
Apr 12 18:21:20.324432 systemd[1]: Starting systemd-update-utmp.service...
Apr 12 18:21:20.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:20.328656 systemd[1]: Finished clean-ca-certificates.service.
Apr 12 18:21:20.333297 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 12 18:21:20.370000 audit[1541]: SYSTEM_BOOT pid=1541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:20.377264 systemd[1]: Finished systemd-update-utmp.service.
Apr 12 18:21:20.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:20.507456 systemd[1]: Finished systemd-journal-catalog-update.service.
Apr 12 18:21:20.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:21:20.522859 ldconfig[1512]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 12 18:21:20.523000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 12 18:21:20.523000 audit[1556]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc2557da0 a2=420 a3=0 items=0 ppid=1536 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:21:20.523000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 12 18:21:20.525527 augenrules[1556]: No rules
Apr 12 18:21:20.527331 systemd[1]: Finished audit-rules.service.
Apr 12 18:21:20.534095 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 12 18:21:20.536092 systemd[1]: Finished systemd-machine-id-commit.service.
Apr 12 18:21:20.541852 systemd[1]: Finished ldconfig.service.
Apr 12 18:21:20.546492 systemd[1]: Starting systemd-update-done.service...
Apr 12 18:21:20.557183 systemd[1]: Started systemd-timesyncd.service.
Apr 12 18:21:20.559446 systemd[1]: Reached target time-set.target.
Apr 12 18:21:20.567587 systemd-resolved[1539]: Positive Trust Anchors:
Apr 12 18:21:20.567708 systemd-resolved[1539]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:21:20.567761 systemd-resolved[1539]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:21:20.571021 systemd[1]: Finished systemd-update-done.service.
Apr 12 18:21:20.598368 systemd-resolved[1539]: Defaulting to hostname 'linux'.
Apr 12 18:21:20.602023 systemd[1]: Started systemd-resolved.service.
Apr 12 18:21:20.604208 systemd[1]: Reached target network.target.
Apr 12 18:21:20.606016 systemd[1]: Reached target nss-lookup.target.
Apr 12 18:21:20.607952 systemd[1]: Reached target sysinit.target.
Apr 12 18:21:20.610113 systemd[1]: Started motdgen.path.
Apr 12 18:21:20.611884 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Apr 12 18:21:20.614741 systemd[1]: Started logrotate.timer.
Apr 12 18:21:20.616735 systemd[1]: Started mdadm.timer.
Apr 12 18:21:20.618379 systemd[1]: Started systemd-tmpfiles-clean.timer.
Apr 12 18:21:20.620390 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 12 18:21:20.620458 systemd[1]: Reached target paths.target.
Apr 12 18:21:20.622187 systemd[1]: Reached target timers.target.
Apr 12 18:21:20.631168 systemd[1]: Listening on dbus.socket.
Apr 12 18:21:20.635415 systemd[1]: Starting docker.socket...
Apr 12 18:21:20.642866 systemd[1]: Listening on sshd.socket.
Apr 12 18:21:20.644819 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:21:20.645876 systemd[1]: Listening on docker.socket.
Apr 12 18:21:20.647926 systemd[1]: Reached target sockets.target.
Apr 12 18:21:20.649751 systemd[1]: Reached target basic.target.
Apr 12 18:21:20.651596 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Apr 12 18:21:20.651680 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Apr 12 18:21:20.654167 systemd[1]: Starting containerd.service...
Apr 12 18:21:20.660065 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Apr 12 18:21:20.666018 systemd[1]: Starting dbus.service...
Apr 12 18:21:20.673239 systemd[1]: Starting enable-oem-cloudinit.service...
Apr 12 18:21:20.679299 systemd[1]: Starting extend-filesystems.service...
Apr 12 18:21:20.697242 jq[1569]: false
Apr 12 18:21:20.681635 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Apr 12 18:21:20.685861 systemd[1]: Starting motdgen.service...
Apr 12 18:21:20.693349 systemd[1]: Starting prepare-cni-plugins.service...
Apr 12 18:21:20.697925 systemd[1]: Starting prepare-critools.service...
Apr 12 18:21:20.702724 systemd[1]: Starting prepare-helm.service...
Apr 12 18:21:20.709294 systemd[1]: Starting ssh-key-proc-cmdline.service...
Apr 12 18:21:20.716174 systemd[1]: Starting sshd-keygen.service...
Apr 12 18:21:20.725192 systemd[1]: Starting systemd-logind.service...
Apr 12 18:21:20.729314 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Apr 12 18:21:20.729454 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 12 18:21:20.764215 jq[1584]: true
Apr 12 18:21:20.730453 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 12 18:21:20.732307 systemd[1]: Starting update-engine.service...
Apr 12 18:21:20.738419 systemd-timesyncd[1540]: Contacted time server 50.205.57.38:123 (0.flatcar.pool.ntp.org).
Apr 12 18:21:20.738538 systemd-timesyncd[1540]: Initial clock synchronization to Fri 2024-04-12 18:21:20.366488 UTC.
Apr 12 18:21:20.741366 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Apr 12 18:21:20.747798 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 12 18:21:20.748304 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Apr 12 18:21:20.775362 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 12 18:21:20.775812 systemd[1]: Finished ssh-key-proc-cmdline.service.
Apr 12 18:21:20.806499 jq[1587]: true
Apr 12 18:21:20.815067 tar[1592]: ./
Apr 12 18:21:20.815067 tar[1592]: ./loopback
Apr 12 18:21:20.827488 tar[1588]: crictl
Apr 12 18:21:20.832549 tar[1589]: linux-arm64/helm
Apr 12 18:21:20.849099 extend-filesystems[1570]: Found nvme0n1
Apr 12 18:21:20.853126 extend-filesystems[1570]: Found nvme0n1p1
Apr 12 18:21:20.853126 extend-filesystems[1570]: Found nvme0n1p2
Apr 12 18:21:20.853126 extend-filesystems[1570]: Found nvme0n1p3
Apr 12 18:21:20.853126 extend-filesystems[1570]: Found usr
Apr 12 18:21:20.853126 extend-filesystems[1570]: Found nvme0n1p4
Apr 12 18:21:20.853126 extend-filesystems[1570]: Found nvme0n1p6
Apr 12 18:21:20.853126 extend-filesystems[1570]: Found nvme0n1p7
Apr 12 18:21:20.879617 extend-filesystems[1570]: Found nvme0n1p9
Apr 12 18:21:20.879617 extend-filesystems[1570]: Checking size of /dev/nvme0n1p9
Apr 12 18:21:20.915914 dbus-daemon[1568]: [system] SELinux support is enabled
Apr 12 18:21:20.920375 systemd[1]: Started dbus.service.
Apr 12 18:21:20.925717 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 12 18:21:20.925793 systemd[1]: Reached target system-config.target.
Apr 12 18:21:20.927881 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 12 18:21:20.927924 systemd[1]: Reached target user-config.target.
Apr 12 18:21:20.941008 dbus-daemon[1568]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1403 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 12 18:21:20.945901 systemd[1]: motdgen.service: Deactivated successfully.
Apr 12 18:21:20.946300 systemd[1]: Finished motdgen.service.
Apr 12 18:21:20.952497 dbus-daemon[1568]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 12 18:21:20.965867 systemd[1]: Starting systemd-hostnamed.service...
Apr 12 18:21:20.994411 bash[1623]: Updated "/home/core/.ssh/authorized_keys"
Apr 12 18:21:20.994583 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Apr 12 18:21:21.021837 extend-filesystems[1570]: Resized partition /dev/nvme0n1p9
Apr 12 18:21:21.026699 extend-filesystems[1628]: resize2fs 1.46.5 (30-Dec-2021)
Apr 12 18:21:21.066072 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Apr 12 18:21:21.069153 update_engine[1583]: I0412 18:21:21.064389 1583 main.cc:92] Flatcar Update Engine starting
Apr 12 18:21:21.081139 systemd[1]: Started update-engine.service.
Apr 12 18:21:21.086344 systemd[1]: Started locksmithd.service.
Apr 12 18:21:21.088010 update_engine[1583]: I0412 18:21:21.087797 1583 update_check_scheduler.cc:74] Next update check in 9m7s
Apr 12 18:21:21.154089 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Apr 12 18:21:21.174659 systemd-logind[1581]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 12 18:21:21.177787 systemd-logind[1581]: New seat seat0.
Apr 12 18:21:21.179863 extend-filesystems[1628]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 12 18:21:21.179863 extend-filesystems[1628]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 12 18:21:21.179863 extend-filesystems[1628]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Apr 12 18:21:21.217785 extend-filesystems[1570]: Resized filesystem in /dev/nvme0n1p9
Apr 12 18:21:21.180656 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 12 18:21:21.181049 systemd[1]: Finished extend-filesystems.service.
Apr 12 18:21:21.199393 systemd[1]: Started systemd-logind.service.
Apr 12 18:21:21.271732 tar[1592]: ./bandwidth
Apr 12 18:21:21.275623 env[1593]: time="2024-04-12T18:21:21.275511730Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Apr 12 18:21:21.329217 systemd-networkd[1403]: eth0: Gained IPv6LL
Apr 12 18:21:21.333919 systemd[1]: Finished systemd-networkd-wait-online.service.
Apr 12 18:21:21.336649 systemd[1]: Reached target network-online.target.
Apr 12 18:21:21.341422 systemd[1]: Started amazon-ssm-agent.service.
Apr 12 18:21:21.347436 systemd[1]: Started nvidia.service.
Apr 12 18:21:21.661174 tar[1592]: ./ptp
Apr 12 18:21:21.705588 env[1593]: time="2024-04-12T18:21:21.705490878Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 12 18:21:21.705851 env[1593]: time="2024-04-12T18:21:21.705794961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:21:21.718601 env[1593]: time="2024-04-12T18:21:21.718513369Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 12 18:21:21.718601 env[1593]: time="2024-04-12T18:21:21.718590434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:21:21.719260 env[1593]: time="2024-04-12T18:21:21.719186310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 12 18:21:21.719260 env[1593]: time="2024-04-12T18:21:21.719252642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 12 18:21:21.719449 env[1593]: time="2024-04-12T18:21:21.719289647Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Apr 12 18:21:21.719449 env[1593]: time="2024-04-12T18:21:21.719316628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 12 18:21:21.719579 env[1593]: time="2024-04-12T18:21:21.719545088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:21:21.720209 env[1593]: time="2024-04-12T18:21:21.720136296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 12 18:21:21.720595 env[1593]: time="2024-04-12T18:21:21.720515842Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 12 18:21:21.720595 env[1593]: time="2024-04-12T18:21:21.720581864Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 12 18:21:21.720809 env[1593]: time="2024-04-12T18:21:21.720757277Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Apr 12 18:21:21.720931 env[1593]: time="2024-04-12T18:21:21.720803504Z" level=info msg="metadata content store policy set" policy=shared
Apr 12 18:21:21.721738 dbus-daemon[1568]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 12 18:21:21.722113 systemd[1]: Started systemd-hostnamed.service.
Apr 12 18:21:21.727433 dbus-daemon[1568]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1624 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 12 18:21:21.732973 systemd[1]: Starting polkit.service...
Apr 12 18:21:21.757414 env[1593]: time="2024-04-12T18:21:21.757318741Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 12 18:21:21.757414 env[1593]: time="2024-04-12T18:21:21.757408198Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 12 18:21:21.757646 env[1593]: time="2024-04-12T18:21:21.757443784Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 12 18:21:21.757646 env[1593]: time="2024-04-12T18:21:21.757519956Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 12 18:21:21.757646 env[1593]: time="2024-04-12T18:21:21.757556595Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 12 18:21:21.757646 env[1593]: time="2024-04-12T18:21:21.757589767Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 12 18:21:21.757646 env[1593]: time="2024-04-12T18:21:21.757621085Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 12 18:21:21.758352 env[1593]: time="2024-04-12T18:21:21.758267148Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 12 18:21:21.758352 env[1593]: time="2024-04-12T18:21:21.758339819Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Apr 12 18:21:21.758561 env[1593]: time="2024-04-12T18:21:21.758379993Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 12 18:21:21.758561 env[1593]: time="2024-04-12T18:21:21.758414320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 12 18:21:21.758561 env[1593]: time="2024-04-12T18:21:21.758446462Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 12 18:21:21.758741 env[1593]: time="2024-04-12T18:21:21.758713243Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 12 18:21:21.759012 env[1593]: time="2024-04-12T18:21:21.758941748Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 12 18:21:21.759476 env[1593]: time="2024-04-12T18:21:21.759413623Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 12 18:21:21.759617 env[1593]: time="2024-04-12T18:21:21.759499910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.759617 env[1593]: time="2024-04-12T18:21:21.759537350Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 12 18:21:21.759767 env[1593]: time="2024-04-12T18:21:21.759661844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.759767 env[1593]: time="2024-04-12T18:21:21.759699329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.759767 env[1593]: time="2024-04-12T18:21:21.759731585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.759909 env[1593]: time="2024-04-12T18:21:21.759772629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.759909 env[1593]: time="2024-04-12T18:21:21.759805023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.759909 env[1593]: time="2024-04-12T18:21:21.759837245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.760127 env[1593]: time="2024-04-12T18:21:21.759874524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.760127 env[1593]: time="2024-04-12T18:21:21.759937000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.760127 env[1593]: time="2024-04-12T18:21:21.759974028Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 12 18:21:21.766517 env[1593]: time="2024-04-12T18:21:21.766431820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.766517 env[1593]: time="2024-04-12T18:21:21.766509194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.766748 env[1593]: time="2024-04-12T18:21:21.766547057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.766748 env[1593]: time="2024-04-12T18:21:21.766577665Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 12 18:21:21.766748 env[1593]: time="2024-04-12T18:21:21.766616444Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Apr 12 18:21:21.766748 env[1593]: time="2024-04-12T18:21:21.766644558Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 12 18:21:21.766748 env[1593]: time="2024-04-12T18:21:21.766680029Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Apr 12 18:21:21.767012 env[1593]: time="2024-04-12T18:21:21.766745960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 12 18:21:21.767262 env[1593]: time="2024-04-12T18:21:21.767142510Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 12 18:21:21.773372 env[1593]: time="2024-04-12T18:21:21.767267953Z" level=info msg="Connect containerd service"
Apr 12 18:21:21.773372 env[1593]: time="2024-04-12T18:21:21.767337271Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 12 18:21:21.783369 amazon-ssm-agent[1645]: 2024/04/12 18:21:21 Failed to load instance info from vault. RegistrationKey does not exist.
Apr 12 18:21:21.794889 polkitd[1680]: Started polkitd version 121
Apr 12 18:21:21.795435 amazon-ssm-agent[1645]: Initializing new seelog logger
Apr 12 18:21:21.795548 amazon-ssm-agent[1645]: New Seelog Logger Creation Complete
Apr 12 18:21:21.795855 amazon-ssm-agent[1645]: 2024/04/12 18:21:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 12 18:21:21.795855 amazon-ssm-agent[1645]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 12 18:21:21.796463 env[1593]: time="2024-04-12T18:21:21.796367643Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 12 18:21:21.797068 env[1593]: time="2024-04-12T18:21:21.796948644Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 12 18:21:21.797227 env[1593]: time="2024-04-12T18:21:21.797124365Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 12 18:21:21.797360 systemd[1]: Started containerd.service.
Apr 12 18:21:21.801288 env[1593]: time="2024-04-12T18:21:21.801198743Z" level=info msg="Start subscribing containerd event"
Apr 12 18:21:21.801497 env[1593]: time="2024-04-12T18:21:21.801307572Z" level=info msg="Start recovering state"
Apr 12 18:21:21.801497 env[1593]: time="2024-04-12T18:21:21.801436895Z" level=info msg="Start event monitor"
Apr 12 18:21:21.804292 env[1593]: time="2024-04-12T18:21:21.804201708Z" level=info msg="Start snapshots syncer"
Apr 12 18:21:21.804292 env[1593]: time="2024-04-12T18:21:21.804288431Z" level=info msg="Start cni network conf syncer for default"
Apr 12 18:21:21.804292 env[1593]: time="2024-04-12T18:21:21.804313982Z" level=info msg="Start streaming server"
Apr 12 18:21:21.807444 amazon-ssm-agent[1645]: 2024/04/12 18:21:21 processing appconfig overrides
Apr 12 18:21:21.841233 env[1593]: time="2024-04-12T18:21:21.841165832Z" level=info msg="containerd successfully booted in 0.604600s"
Apr 12 18:21:21.855260 polkitd[1680]: Loading rules from directory /etc/polkit-1/rules.d
Apr 12 18:21:21.861269 polkitd[1680]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 12 18:21:21.873394 polkitd[1680]: Finished loading, compiling and executing 2 rules
Apr 12 18:21:21.874896 dbus-daemon[1568]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 12 18:21:21.875197 systemd[1]: Started polkit.service.
Apr 12 18:21:21.883600 polkitd[1680]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 12 18:21:21.945729 systemd-hostnamed[1624]: Hostname set to (transient)
Apr 12 18:21:21.945914 systemd-resolved[1539]: System hostname changed to 'ip-172-31-18-247'.
Apr 12 18:21:21.958796 coreos-metadata[1567]: Apr 12 18:21:21.958 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 12 18:21:21.959971 coreos-metadata[1567]: Apr 12 18:21:21.959 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Apr 12 18:21:21.961636 coreos-metadata[1567]: Apr 12 18:21:21.961 INFO Fetch successful
Apr 12 18:21:21.962389 coreos-metadata[1567]: Apr 12 18:21:21.961 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 12 18:21:21.962492 systemd[1]: nvidia.service: Deactivated successfully.
Apr 12 18:21:21.965163 coreos-metadata[1567]: Apr 12 18:21:21.964 INFO Fetch successful
Apr 12 18:21:21.974865 unknown[1567]: wrote ssh authorized keys file for user: core
Apr 12 18:21:22.002511 update-ssh-keys[1724]: Updated "/home/core/.ssh/authorized_keys"
Apr 12 18:21:22.003743 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Apr 12 18:21:22.043540 tar[1592]: ./vlan
Apr 12 18:21:22.271203 tar[1592]: ./host-device
Apr 12 18:21:22.486420 tar[1592]: ./tuning
Apr 12 18:21:22.632708 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO Create new startup processor
Apr 12 18:21:22.649645 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [LongRunningPluginsManager] registered plugins: {}
Apr 12 18:21:22.649893 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO Initializing bookkeeping folders
Apr 12 18:21:22.650068 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO removing the completed state files
Apr 12 18:21:22.650292 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO Initializing bookkeeping folders for long running plugins
Apr 12 18:21:22.650521 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Apr 12 18:21:22.650693 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO Initializing healthcheck folders for long running plugins
Apr 12 18:21:22.650870 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO Initializing locations for inventory plugin
Apr 12 18:21:22.651079 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO Initializing default location for custom inventory
Apr 12 18:21:22.651267 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO Initializing default location for file inventory
Apr 12 18:21:22.651421 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO Initializing default location for role inventory
Apr 12 18:21:22.651580 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO Init the cloudwatchlogs publisher
Apr 12 18:21:22.651759 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [instanceID=i-03fcd68245f3feafd] Successfully loaded platform independent plugin aws:configureDocker
Apr 12 18:21:22.651928 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [instanceID=i-03fcd68245f3feafd] Successfully loaded platform independent plugin aws:refreshAssociation
Apr 12 18:21:22.652127 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [instanceID=i-03fcd68245f3feafd] Successfully loaded platform independent plugin aws:downloadContent
Apr 12 18:21:22.656183 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [instanceID=i-03fcd68245f3feafd] Successfully loaded platform independent plugin aws:softwareInventory
Apr 12 18:21:22.656404 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [instanceID=i-03fcd68245f3feafd] Successfully loaded platform independent plugin aws:runPowerShellScript
Apr 12 18:21:22.656542 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [instanceID=i-03fcd68245f3feafd] Successfully loaded platform independent plugin aws:updateSsmAgent
Apr 12 18:21:22.656675 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [instanceID=i-03fcd68245f3feafd] Successfully loaded platform independent plugin aws:runDockerAction
Apr 12 18:21:22.656812 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [instanceID=i-03fcd68245f3feafd] Successfully loaded platform independent plugin aws:configurePackage
Apr 12 18:21:22.656947 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [instanceID=i-03fcd68245f3feafd] Successfully loaded platform independent plugin aws:runDocument
Apr 12 18:21:22.657131 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [instanceID=i-03fcd68245f3feafd] Successfully loaded platform dependent plugin aws:runShellScript
Apr 12 18:21:22.657299 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Apr 12 18:21:22.657436 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO OS: linux, Arch: arm64
Apr 12 18:21:22.665558 amazon-ssm-agent[1645]: datastore file /var/lib/amazon/ssm/i-03fcd68245f3feafd/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Apr 12 18:21:22.671342 tar[1592]: ./vrf
Apr 12 18:21:22.748164 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessagingDeliveryService] Starting document processing engine...
Apr 12 18:21:22.798970 tar[1592]: ./sbr
Apr 12 18:21:22.843151 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Apr 12 18:21:22.888439 tar[1592]: ./tap
Apr 12 18:21:22.937494 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Apr 12 18:21:23.004925 tar[1592]: ./dhcp
Apr 12 18:21:23.032021 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessagingDeliveryService] Starting message polling
Apr 12 18:21:23.126763 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessagingDeliveryService] Starting send replies to MDS
Apr 12 18:21:23.221771 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [instanceID=i-03fcd68245f3feafd] Starting association polling
Apr 12 18:21:23.282688 tar[1592]: ./static
Apr 12 18:21:23.316843 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Apr 12 18:21:23.368178 tar[1592]: ./firewall
Apr 12 18:21:23.412200 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessagingDeliveryService] [Association] Launching response handler
Apr 12 18:21:23.425467 systemd[1]: Finished prepare-critools.service.
Apr 12 18:21:23.494277 tar[1592]: ./macvlan
Apr 12 18:21:23.507683 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Apr 12 18:21:23.534699 tar[1589]: linux-arm64/LICENSE
Apr 12 18:21:23.535326 tar[1589]: linux-arm64/README.md
Apr 12 18:21:23.544367 systemd[1]: Finished prepare-helm.service.
Apr 12 18:21:23.589730 tar[1592]: ./dummy
Apr 12 18:21:23.603425 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Apr 12 18:21:23.654770 tar[1592]: ./bridge
Apr 12 18:21:23.699370 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Apr 12 18:21:23.725670 tar[1592]: ./ipvlan
Apr 12 18:21:23.791959 tar[1592]: ./portmap
Apr 12 18:21:23.795468 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [HealthCheck] HealthCheck reporting agent health.
Apr 12 18:21:23.857872 tar[1592]: ./host-local
Apr 12 18:21:23.891795 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessageGatewayService] Starting session document processing engine...
Apr 12 18:21:23.938241 systemd[1]: Finished prepare-cni-plugins.service.
Apr 12 18:21:23.988247 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessageGatewayService] [EngineProcessor] Starting
Apr 12 18:21:23.990684 locksmithd[1635]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 12 18:21:24.085018 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Apr 12 18:21:24.181812 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-03fcd68245f3feafd, requestId: aa611318-bbaa-4daa-8a3e-aea6d6ca5ff8
Apr 12 18:21:24.278915 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [OfflineService] Starting document processing engine...
Apr 12 18:21:24.376272 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [OfflineService] [EngineProcessor] Starting
Apr 12 18:21:24.473729 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [OfflineService] [EngineProcessor] Initial processing
Apr 12 18:21:24.571497 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [OfflineService] Starting message polling
Apr 12 18:21:24.669492 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [OfflineService] Starting send replies to MDS
Apr 12 18:21:24.767546 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [LongRunningPluginsManager] starting long running plugin manager
Apr 12 18:21:24.865828 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Apr 12 18:21:24.964305 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessageGatewayService] listening reply.
Apr 12 18:21:25.062928 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [StartupProcessor] Executing startup processor tasks
Apr 12 18:21:25.161798 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Apr 12 18:21:25.260939 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Apr 12 18:21:25.360139 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.3
Apr 12 18:21:25.459599 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Apr 12 18:21:25.559314 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-03fcd68245f3feafd?role=subscribe&stream=input
Apr 12 18:21:25.659095 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-03fcd68245f3feafd?role=subscribe&stream=input
Apr 12 18:21:25.759140 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessageGatewayService] Starting receiving message from control channel
Apr 12 18:21:25.859421 amazon-ssm-agent[1645]: 2024-04-12 18:21:22 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Apr 12 18:21:26.732748 sshd_keygen[1604]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 12 18:21:26.776452 systemd[1]: Finished sshd-keygen.service.
Apr 12 18:21:26.781958 systemd[1]: Starting issuegen.service...
Apr 12 18:21:26.795674 systemd[1]: issuegen.service: Deactivated successfully.
Apr 12 18:21:26.796158 systemd[1]: Finished issuegen.service.
Apr 12 18:21:26.801706 systemd[1]: Starting systemd-user-sessions.service...
Apr 12 18:21:26.818376 systemd[1]: Finished systemd-user-sessions.service.
Apr 12 18:21:26.823651 systemd[1]: Started getty@tty1.service.
Apr 12 18:21:26.828547 systemd[1]: Started serial-getty@ttyS0.service.
Apr 12 18:21:26.830903 systemd[1]: Reached target getty.target.
Apr 12 18:21:26.832864 systemd[1]: Reached target multi-user.target.
Apr 12 18:21:26.837918 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Apr 12 18:21:26.854795 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Apr 12 18:21:26.855263 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Apr 12 18:21:26.857617 systemd[1]: Startup finished in 1.211s (kernel) + 37.538s (initrd) + 13.721s (userspace) = 52.471s.
Apr 12 18:21:30.158215 systemd[1]: Created slice system-sshd.slice.
Apr 12 18:21:30.160949 systemd[1]: Started sshd@0-172.31.18.247:22-139.178.89.65:57952.service.
Apr 12 18:21:30.346752 sshd[1790]: Accepted publickey for core from 139.178.89.65 port 57952 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:21:30.352173 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:21:30.375868 systemd[1]: Created slice user-500.slice.
Apr 12 18:21:30.379487 systemd[1]: Starting user-runtime-dir@500.service...
Apr 12 18:21:30.391884 systemd-logind[1581]: New session 1 of user core.
Apr 12 18:21:30.403566 systemd[1]: Finished user-runtime-dir@500.service.
Apr 12 18:21:30.408528 systemd[1]: Starting user@500.service...
Apr 12 18:21:30.418219 (systemd)[1793]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:21:30.452223 amazon-ssm-agent[1645]: 2024-04-12 18:21:30 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Apr 12 18:21:30.615479 systemd[1793]: Queued start job for default target default.target.
Apr 12 18:21:30.616698 systemd[1793]: Reached target paths.target.
Apr 12 18:21:30.616756 systemd[1793]: Reached target sockets.target.
Apr 12 18:21:30.616790 systemd[1793]: Reached target timers.target.
Apr 12 18:21:30.616820 systemd[1793]: Reached target basic.target.
Apr 12 18:21:30.616929 systemd[1793]: Reached target default.target.
Apr 12 18:21:30.617004 systemd[1793]: Startup finished in 184ms.
Apr 12 18:21:30.618320 systemd[1]: Started user@500.service.
Apr 12 18:21:30.621056 systemd[1]: Started session-1.scope.
Apr 12 18:21:30.772428 systemd[1]: Started sshd@1-172.31.18.247:22-139.178.89.65:57966.service.
Apr 12 18:21:30.954329 sshd[1802]: Accepted publickey for core from 139.178.89.65 port 57966 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:21:30.957836 sshd[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:21:30.968370 systemd[1]: Started session-2.scope.
Apr 12 18:21:30.970175 systemd-logind[1581]: New session 2 of user core.
Apr 12 18:21:31.111666 sshd[1802]: pam_unix(sshd:session): session closed for user core
Apr 12 18:21:31.117918 systemd-logind[1581]: Session 2 logged out. Waiting for processes to exit.
Apr 12 18:21:31.118548 systemd[1]: sshd@1-172.31.18.247:22-139.178.89.65:57966.service: Deactivated successfully.
Apr 12 18:21:31.119950 systemd[1]: session-2.scope: Deactivated successfully.
Apr 12 18:21:31.121907 systemd-logind[1581]: Removed session 2.
Apr 12 18:21:31.140320 systemd[1]: Started sshd@2-172.31.18.247:22-139.178.89.65:57978.service.
Apr 12 18:21:31.311902 sshd[1808]: Accepted publickey for core from 139.178.89.65 port 57978 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:21:31.315715 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:21:31.324731 systemd-logind[1581]: New session 3 of user core.
Apr 12 18:21:31.325803 systemd[1]: Started session-3.scope.
Apr 12 18:21:31.450460 sshd[1808]: pam_unix(sshd:session): session closed for user core
Apr 12 18:21:31.457135 systemd-logind[1581]: Session 3 logged out. Waiting for processes to exit.
Apr 12 18:21:31.457551 systemd[1]: sshd@2-172.31.18.247:22-139.178.89.65:57978.service: Deactivated successfully.
Apr 12 18:21:31.458951 systemd[1]: session-3.scope: Deactivated successfully.
Apr 12 18:21:31.460707 systemd-logind[1581]: Removed session 3.
Apr 12 18:21:31.478227 systemd[1]: Started sshd@3-172.31.18.247:22-139.178.89.65:57994.service.
Apr 12 18:21:31.645674 sshd[1814]: Accepted publickey for core from 139.178.89.65 port 57994 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:21:31.649281 sshd[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:21:31.658164 systemd-logind[1581]: New session 4 of user core.
Apr 12 18:21:31.659575 systemd[1]: Started session-4.scope.
Apr 12 18:21:31.793442 sshd[1814]: pam_unix(sshd:session): session closed for user core
Apr 12 18:21:31.800191 systemd[1]: sshd@3-172.31.18.247:22-139.178.89.65:57994.service: Deactivated successfully.
Apr 12 18:21:31.801748 systemd[1]: session-4.scope: Deactivated successfully.
Apr 12 18:21:31.804222 systemd-logind[1581]: Session 4 logged out. Waiting for processes to exit.
Apr 12 18:21:31.806790 systemd-logind[1581]: Removed session 4.
Apr 12 18:21:31.824408 systemd[1]: Started sshd@4-172.31.18.247:22-139.178.89.65:58010.service.
Apr 12 18:21:31.995689 sshd[1820]: Accepted publickey for core from 139.178.89.65 port 58010 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:21:31.998589 sshd[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:21:32.009513 systemd[1]: Started session-5.scope.
Apr 12 18:21:32.011157 systemd-logind[1581]: New session 5 of user core.
Apr 12 18:21:32.142640 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 12 18:21:32.144238 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Apr 12 18:21:32.883004 systemd[1]: Starting docker.service...
Apr 12 18:21:32.968079 env[1838]: time="2024-04-12T18:21:32.967928262Z" level=info msg="Starting up"
Apr 12 18:21:32.973341 env[1838]: time="2024-04-12T18:21:32.973282105Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 12 18:21:32.973575 env[1838]: time="2024-04-12T18:21:32.973536907Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 12 18:21:32.973779 env[1838]: time="2024-04-12T18:21:32.973737514Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Apr 12 18:21:32.973924 env[1838]: time="2024-04-12T18:21:32.973891762Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 12 18:21:32.977627 env[1838]: time="2024-04-12T18:21:32.977570337Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 12 18:21:32.977833 env[1838]: time="2024-04-12T18:21:32.977796919Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 12 18:21:32.977978 env[1838]: time="2024-04-12T18:21:32.977942917Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Apr 12 18:21:32.978207 env[1838]: time="2024-04-12T18:21:32.978169036Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 12 18:21:33.037291 env[1838]: time="2024-04-12T18:21:33.037219274Z" level=info msg="Loading containers: start."
Apr 12 18:21:33.265084 kernel: Initializing XFRM netlink socket
Apr 12 18:21:33.313084 env[1838]: time="2024-04-12T18:21:33.312988514Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 12 18:21:33.315777 (udev-worker)[1848]: Network interface NamePolicy= disabled on kernel command line.
Apr 12 18:21:33.424403 systemd-networkd[1403]: docker0: Link UP
Apr 12 18:21:33.445620 env[1838]: time="2024-04-12T18:21:33.445542001Z" level=info msg="Loading containers: done."
Apr 12 18:21:33.475571 env[1838]: time="2024-04-12T18:21:33.475499284Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 12 18:21:33.476262 env[1838]: time="2024-04-12T18:21:33.476214971Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Apr 12 18:21:33.476768 env[1838]: time="2024-04-12T18:21:33.476723337Z" level=info msg="Daemon has completed initialization"
Apr 12 18:21:33.506593 systemd[1]: Started docker.service.
Apr 12 18:21:33.516174 env[1838]: time="2024-04-12T18:21:33.516044678Z" level=info msg="API listen on /run/docker.sock"
Apr 12 18:21:33.554499 systemd[1]: Reloading.
Apr 12 18:21:33.720577 /usr/lib/systemd/system-generators/torcx-generator[1974]: time="2024-04-12T18:21:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:21:33.721492 /usr/lib/systemd/system-generators/torcx-generator[1974]: time="2024-04-12T18:21:33Z" level=info msg="torcx already run"
Apr 12 18:21:33.900062 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:21:33.900111 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:21:33.944864 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:21:34.193960 systemd[1]: Started kubelet.service.
Apr 12 18:21:34.355419 kubelet[2030]: E0412 18:21:34.355328 2030 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 12 18:21:34.360721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 12 18:21:34.361128 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 12 18:21:34.713221 env[1593]: time="2024-04-12T18:21:34.713118558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\""
Apr 12 18:21:35.401410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607436960.mount: Deactivated successfully.
Apr 12 18:21:38.612802 env[1593]: time="2024-04-12T18:21:38.612712989Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:38.617090 env[1593]: time="2024-04-12T18:21:38.616986015Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:38.624269 env[1593]: time="2024-04-12T18:21:38.624182828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:38.628679 env[1593]: time="2024-04-12T18:21:38.628605588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:38.630694 env[1593]: time="2024-04-12T18:21:38.630607325Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\" returns image reference \"sha256:2581114f5709d3459ca39f243fd21fde75f2f60d205ffdcd57b4207c33980794\""
Apr 12 18:21:38.649570 env[1593]: time="2024-04-12T18:21:38.649484030Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\""
Apr 12 18:21:41.784803 env[1593]: time="2024-04-12T18:21:41.784729930Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:41.788218 env[1593]: time="2024-04-12T18:21:41.788150140Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:41.792162 env[1593]: time="2024-04-12T18:21:41.792094691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:41.796229 env[1593]: time="2024-04-12T18:21:41.796155208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:41.798421 env[1593]: time="2024-04-12T18:21:41.798352124Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\" returns image reference \"sha256:121d70d9a3805f44c7c587a60d9360495cf9d95129047f4818bb7110ec1ec195\""
Apr 12 18:21:41.820841 env[1593]: time="2024-04-12T18:21:41.820725802Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\""
Apr 12 18:21:43.895213 env[1593]: time="2024-04-12T18:21:43.895107702Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:43.899328 env[1593]: time="2024-04-12T18:21:43.899224569Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:43.903515 env[1593]: time="2024-04-12T18:21:43.903434818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:43.907693 env[1593]: time="2024-04-12T18:21:43.907605624Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:43.910107 env[1593]: time="2024-04-12T18:21:43.909989181Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\" returns image reference \"sha256:4b51f9f6bc9b9a68473278361df0e8985109b56c7b649532c6bffcab2a8c65fb\""
Apr 12 18:21:43.929950 env[1593]: time="2024-04-12T18:21:43.929867165Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.3\""
Apr 12 18:21:44.533676 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 12 18:21:44.534153 systemd[1]: Stopped kubelet.service.
Apr 12 18:21:44.537622 systemd[1]: Started kubelet.service.
Apr 12 18:21:44.668865 kubelet[2065]: E0412 18:21:44.668787 2065 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 12 18:21:44.678523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 12 18:21:44.678873 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 12 18:21:45.498894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648447315.mount: Deactivated successfully.
Apr 12 18:21:46.376919 env[1593]: time="2024-04-12T18:21:46.376821697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:46.381080 env[1593]: time="2024-04-12T18:21:46.380980338Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:46.383768 env[1593]: time="2024-04-12T18:21:46.383675214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:46.387010 env[1593]: time="2024-04-12T18:21:46.386917034Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:46.389042 env[1593]: time="2024-04-12T18:21:46.388919059Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.3\" returns image reference \"sha256:0e9b4a0d1e86d942f5ed93eaf751771e7602104cac5e15256c36967770ad2775\""
Apr 12 18:21:46.409551 env[1593]: time="2024-04-12T18:21:46.409443144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 12 18:21:47.101214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount757453134.mount: Deactivated successfully.
Apr 12 18:21:48.762927 env[1593]: time="2024-04-12T18:21:48.762771503Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:48.767123 env[1593]: time="2024-04-12T18:21:48.766998618Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:48.773608 env[1593]: time="2024-04-12T18:21:48.773535273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:48.779288 env[1593]: time="2024-04-12T18:21:48.779173173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:48.783184 env[1593]: time="2024-04-12T18:21:48.783108613Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Apr 12 18:21:48.802230 env[1593]: time="2024-04-12T18:21:48.802162985Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 12 18:21:49.312159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370313412.mount: Deactivated successfully.
Apr 12 18:21:49.326136 env[1593]: time="2024-04-12T18:21:49.326068192Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:49.331788 env[1593]: time="2024-04-12T18:21:49.331719869Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:49.337430 env[1593]: time="2024-04-12T18:21:49.337357593Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:49.340447 env[1593]: time="2024-04-12T18:21:49.340385252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:49.341823 env[1593]: time="2024-04-12T18:21:49.341751968Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Apr 12 18:21:49.359876 env[1593]: time="2024-04-12T18:21:49.359797503Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Apr 12 18:21:49.957686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3901004967.mount: Deactivated successfully.
Apr 12 18:21:51.957696 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 12 18:21:54.520579 env[1593]: time="2024-04-12T18:21:54.520340211Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:54.532757 env[1593]: time="2024-04-12T18:21:54.532657727Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:54.536777 env[1593]: time="2024-04-12T18:21:54.536710655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:54.541222 env[1593]: time="2024-04-12T18:21:54.541150006Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:21:54.543418 env[1593]: time="2024-04-12T18:21:54.543332255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Apr 12 18:21:54.783705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 12 18:21:54.784193 systemd[1]: Stopped kubelet.service.
Apr 12 18:21:54.787397 systemd[1]: Started kubelet.service.
Apr 12 18:21:54.916457 kubelet[2096]: E0412 18:21:54.916338 2096 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:21:54.921737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:21:54.922173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:22:00.481879 amazon-ssm-agent[1645]: 2024-04-12 18:22:00 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Apr 12 18:22:01.510390 systemd[1]: Stopped kubelet.service. Apr 12 18:22:01.547000 systemd[1]: Reloading. Apr 12 18:22:01.720344 /usr/lib/systemd/system-generators/torcx-generator[2186]: time="2024-04-12T18:22:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:22:01.721207 /usr/lib/systemd/system-generators/torcx-generator[2186]: time="2024-04-12T18:22:01Z" level=info msg="torcx already run" Apr 12 18:22:01.891402 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:22:01.891453 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:22:01.932144 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
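The kubelet crash above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the expected symptom of the kubelet starting before `kubeadm init`/`kubeadm join` has written its configuration; once kubeadm runs, the restarted kubelet (pid 2237 below) comes up with that file in place. A minimal sketch of what kubeadm would normally generate there, with values chosen to match settings visible later in this log (the exact file contents are an assumption):

```yaml
# /var/lib/kubelet/config.yaml -- illustrative sketch; normally written by kubeadm
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # matches "CgroupDriver":"systemd" in the node config below
staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" below
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # the CA bundle the kubelet starts watching below
```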
Apr 12 18:22:02.184616 systemd[1]: Started kubelet.service. Apr 12 18:22:02.287433 kubelet[2237]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:22:02.287433 kubelet[2237]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:22:02.287433 kubelet[2237]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:22:02.288285 kubelet[2237]: I0412 18:22:02.287567 2237 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:22:03.259975 kubelet[2237]: I0412 18:22:03.259863 2237 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Apr 12 18:22:03.259975 kubelet[2237]: I0412 18:22:03.259953 2237 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:22:03.260573 kubelet[2237]: I0412 18:22:03.260476 2237 server.go:919] "Client rotation is on, will bootstrap in background" Apr 12 18:22:03.274898 kubelet[2237]: I0412 18:22:03.274831 2237 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:22:03.275485 kubelet[2237]: E0412 18:22:03.275396 2237 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.247:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 
18:22:03.287587 kubelet[2237]: I0412 18:22:03.287516 2237 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 12 18:22:03.288310 kubelet[2237]: I0412 18:22:03.288097 2237 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:22:03.288520 kubelet[2237]: I0412 18:22:03.288461 2237 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 18:22:03.288750 kubelet[2237]: I0412 18:22:03.288537 2237 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:22:03.288750 
kubelet[2237]: I0412 18:22:03.288562 2237 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:22:03.288909 kubelet[2237]: I0412 18:22:03.288780 2237 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:22:03.289193 kubelet[2237]: I0412 18:22:03.289154 2237 kubelet.go:396] "Attempting to sync node with API server" Apr 12 18:22:03.289404 kubelet[2237]: I0412 18:22:03.289209 2237 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:22:03.289404 kubelet[2237]: I0412 18:22:03.289288 2237 kubelet.go:312] "Adding apiserver pod source" Apr 12 18:22:03.289404 kubelet[2237]: I0412 18:22:03.289326 2237 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:22:03.291449 kubelet[2237]: W0412 18:22:03.291359 2237 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.18.247:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:03.291795 kubelet[2237]: E0412 18:22:03.291748 2237 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.247:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:03.292355 kubelet[2237]: W0412 18:22:03.292252 2237 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.18.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-247&limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:03.292725 kubelet[2237]: E0412 18:22:03.292675 2237 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://172.31.18.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-247&limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:03.293274 kubelet[2237]: I0412 18:22:03.293194 2237 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:22:03.294339 kubelet[2237]: I0412 18:22:03.294273 2237 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 12 18:22:03.294688 kubelet[2237]: W0412 18:22:03.294652 2237 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 12 18:22:03.296371 kubelet[2237]: I0412 18:22:03.296322 2237 server.go:1256] "Started kubelet" Apr 12 18:22:03.301292 kubelet[2237]: E0412 18:22:03.300353 2237 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.247:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.247:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-247.17c59b6c6e74f738 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-247,UID:ip-172-31-18-247,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-247,},FirstTimestamp:2024-04-12 18:22:03.296274232 +0000 UTC m=+1.098041494,LastTimestamp:2024-04-12 18:22:03.296274232 +0000 UTC m=+1.098041494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-247,}" Apr 12 18:22:03.302672 kubelet[2237]: I0412 18:22:03.301951 2237 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 12 18:22:03.303270 kubelet[2237]: I0412 18:22:03.303198 2237 server.go:233] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:22:03.303467 kubelet[2237]: I0412 18:22:03.303359 2237 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:22:03.307775 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Apr 12 18:22:03.307957 kubelet[2237]: I0412 18:22:03.305082 2237 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:22:03.308798 kubelet[2237]: I0412 18:22:03.308753 2237 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:22:03.309989 kubelet[2237]: E0412 18:22:03.309922 2237 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:22:03.312354 kubelet[2237]: I0412 18:22:03.312304 2237 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:22:03.313523 kubelet[2237]: I0412 18:22:03.313480 2237 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:22:03.314508 kubelet[2237]: I0412 18:22:03.314472 2237 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 18:22:03.315498 kubelet[2237]: W0412 18:22:03.315416 2237 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.18.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:03.315775 kubelet[2237]: E0412 18:22:03.315742 2237 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:03.316444 kubelet[2237]: E0412 18:22:03.316386 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.18.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-247?timeout=10s\": dial tcp 172.31.18.247:6443: connect: connection refused" interval="200ms" Apr 12 18:22:03.318223 kubelet[2237]: I0412 18:22:03.318166 2237 factory.go:221] Registration of the systemd container factory successfully Apr 12 18:22:03.318744 kubelet[2237]: I0412 18:22:03.318685 2237 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 12 18:22:03.323663 kubelet[2237]: I0412 18:22:03.323596 2237 factory.go:221] Registration of the containerd container factory successfully Apr 12 18:22:03.359853 kubelet[2237]: I0412 18:22:03.359782 2237 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:22:03.359853 kubelet[2237]: I0412 18:22:03.359834 2237 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:22:03.360181 kubelet[2237]: I0412 18:22:03.359873 2237 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:22:03.363237 kubelet[2237]: I0412 18:22:03.363171 2237 policy_none.go:49] "None policy: Start" Apr 12 18:22:03.364902 kubelet[2237]: I0412 18:22:03.364849 2237 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 12 18:22:03.365106 kubelet[2237]: I0412 18:22:03.364919 2237 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:22:03.378549 systemd[1]: Created slice kubepods.slice. Apr 12 18:22:03.396085 systemd[1]: Created slice kubepods-burstable.slice. Apr 12 18:22:03.397622 kubelet[2237]: I0412 18:22:03.396525 2237 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 18:22:03.405560 systemd[1]: Created slice kubepods-besteffort.slice. Apr 12 18:22:03.407397 kubelet[2237]: I0412 18:22:03.407236 2237 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 12 18:22:03.407397 kubelet[2237]: I0412 18:22:03.407292 2237 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:22:03.407397 kubelet[2237]: I0412 18:22:03.407339 2237 kubelet.go:2329] "Starting kubelet main sync loop" Apr 12 18:22:03.407749 kubelet[2237]: E0412 18:22:03.407614 2237 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:22:03.410718 kubelet[2237]: W0412 18:22:03.410619 2237 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.18.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:03.411100 kubelet[2237]: E0412 18:22:03.411065 2237 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:03.414721 kubelet[2237]: I0412 18:22:03.414661 2237 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:22:03.417735 kubelet[2237]: I0412 18:22:03.417676 2237 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:22:03.425682 kubelet[2237]: I0412 18:22:03.424300 2237 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-247" Apr 12 18:22:03.427394 kubelet[2237]: E0412 18:22:03.427341 2237 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-247\" not found" Apr 12 18:22:03.427944 kubelet[2237]: E0412 18:22:03.427834 2237 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://172.31.18.247:6443/api/v1/nodes\": dial tcp 172.31.18.247:6443: connect: connection refused" node="ip-172-31-18-247" Apr 12 18:22:03.508609 kubelet[2237]: I0412 18:22:03.508548 2237 topology_manager.go:215] "Topology Admit Handler" podUID="400369484f889a03e632b5177986a51c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-247" Apr 12 18:22:03.511321 kubelet[2237]: I0412 18:22:03.511160 2237 topology_manager.go:215] "Topology Admit Handler" podUID="9faafe6b5f4dc561771e71a8afa35f66" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:03.515743 kubelet[2237]: I0412 18:22:03.515689 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/400369484f889a03e632b5177986a51c-ca-certs\") pod \"kube-apiserver-ip-172-31-18-247\" (UID: \"400369484f889a03e632b5177986a51c\") " pod="kube-system/kube-apiserver-ip-172-31-18-247" Apr 12 18:22:03.516216 kubelet[2237]: I0412 18:22:03.516174 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/400369484f889a03e632b5177986a51c-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-247\" (UID: \"400369484f889a03e632b5177986a51c\") " pod="kube-system/kube-apiserver-ip-172-31-18-247" Apr 12 18:22:03.516593 kubelet[2237]: I0412 18:22:03.516507 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/400369484f889a03e632b5177986a51c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-247\" (UID: \"400369484f889a03e632b5177986a51c\") " pod="kube-system/kube-apiserver-ip-172-31-18-247" Apr 12 18:22:03.517883 kubelet[2237]: I0412 18:22:03.517805 2237 topology_manager.go:215] "Topology Admit Handler" podUID="6fbf90375cdac9b9ba40403531591abd" podNamespace="kube-system" 
podName="kube-scheduler-ip-172-31-18-247" Apr 12 18:22:03.518184 kubelet[2237]: I0412 18:22:03.518140 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9faafe6b5f4dc561771e71a8afa35f66-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-247\" (UID: \"9faafe6b5f4dc561771e71a8afa35f66\") " pod="kube-system/kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:03.519471 kubelet[2237]: I0412 18:22:03.519363 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9faafe6b5f4dc561771e71a8afa35f66-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-247\" (UID: \"9faafe6b5f4dc561771e71a8afa35f66\") " pod="kube-system/kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:03.519471 kubelet[2237]: I0412 18:22:03.519454 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9faafe6b5f4dc561771e71a8afa35f66-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-247\" (UID: \"9faafe6b5f4dc561771e71a8afa35f66\") " pod="kube-system/kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:03.519717 kubelet[2237]: I0412 18:22:03.519504 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9faafe6b5f4dc561771e71a8afa35f66-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-247\" (UID: \"9faafe6b5f4dc561771e71a8afa35f66\") " pod="kube-system/kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:03.519717 kubelet[2237]: I0412 18:22:03.519556 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9faafe6b5f4dc561771e71a8afa35f66-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-18-247\" (UID: \"9faafe6b5f4dc561771e71a8afa35f66\") " pod="kube-system/kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:03.519717 kubelet[2237]: E0412 18:22:03.518884 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-247?timeout=10s\": dial tcp 172.31.18.247:6443: connect: connection refused" interval="400ms" Apr 12 18:22:03.530374 systemd[1]: Created slice kubepods-burstable-pod400369484f889a03e632b5177986a51c.slice. Apr 12 18:22:03.551825 systemd[1]: Created slice kubepods-burstable-pod9faafe6b5f4dc561771e71a8afa35f66.slice. Apr 12 18:22:03.559723 systemd[1]: Created slice kubepods-burstable-pod6fbf90375cdac9b9ba40403531591abd.slice. Apr 12 18:22:03.620237 kubelet[2237]: I0412 18:22:03.620187 2237 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fbf90375cdac9b9ba40403531591abd-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-247\" (UID: \"6fbf90375cdac9b9ba40403531591abd\") " pod="kube-system/kube-scheduler-ip-172-31-18-247" Apr 12 18:22:03.631918 kubelet[2237]: I0412 18:22:03.631868 2237 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-247" Apr 12 18:22:03.632536 kubelet[2237]: E0412 18:22:03.632497 2237 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.247:6443/api/v1/nodes\": dial tcp 172.31.18.247:6443: connect: connection refused" node="ip-172-31-18-247" Apr 12 18:22:03.843750 env[1593]: time="2024-04-12T18:22:03.843676541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-247,Uid:400369484f889a03e632b5177986a51c,Namespace:kube-system,Attempt:0,}" Apr 12 18:22:03.858919 env[1593]: time="2024-04-12T18:22:03.858513343Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-247,Uid:9faafe6b5f4dc561771e71a8afa35f66,Namespace:kube-system,Attempt:0,}" Apr 12 18:22:03.866846 env[1593]: time="2024-04-12T18:22:03.866761863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-247,Uid:6fbf90375cdac9b9ba40403531591abd,Namespace:kube-system,Attempt:0,}" Apr 12 18:22:03.921129 kubelet[2237]: E0412 18:22:03.921002 2237 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-247?timeout=10s\": dial tcp 172.31.18.247:6443: connect: connection refused" interval="800ms" Apr 12 18:22:04.035135 kubelet[2237]: I0412 18:22:04.035083 2237 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-247" Apr 12 18:22:04.035699 kubelet[2237]: E0412 18:22:04.035660 2237 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.247:6443/api/v1/nodes\": dial tcp 172.31.18.247:6443: connect: connection refused" node="ip-172-31-18-247" Apr 12 18:22:04.169710 kubelet[2237]: W0412 18:22:04.169460 2237 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.18.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-247&limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:04.169710 kubelet[2237]: E0412 18:22:04.169562 2237 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.247:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-247&limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:04.285062 kubelet[2237]: W0412 18:22:04.284859 2237 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get 
"https://172.31.18.247:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:04.285062 kubelet[2237]: E0412 18:22:04.284954 2237 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.247:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:04.373777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723719764.mount: Deactivated successfully. Apr 12 18:22:04.389100 env[1593]: time="2024-04-12T18:22:04.388991129Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.391339 env[1593]: time="2024-04-12T18:22:04.391271157Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.396769 env[1593]: time="2024-04-12T18:22:04.396705266Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.399318 env[1593]: time="2024-04-12T18:22:04.399257162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.406590 env[1593]: time="2024-04-12T18:22:04.406506861Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.408877 env[1593]: time="2024-04-12T18:22:04.408796323Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.410635 env[1593]: time="2024-04-12T18:22:04.410562646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.412395 env[1593]: time="2024-04-12T18:22:04.412336997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.419361 env[1593]: time="2024-04-12T18:22:04.419241557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.426890 env[1593]: time="2024-04-12T18:22:04.426716244Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.429185 env[1593]: time="2024-04-12T18:22:04.429122653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.443187 env[1593]: time="2024-04-12T18:22:04.443084317Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:04.460239 env[1593]: time="2024-04-12T18:22:04.460084398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:22:04.460239 env[1593]: time="2024-04-12T18:22:04.460172036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:22:04.460548 env[1593]: time="2024-04-12T18:22:04.460200973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:22:04.461018 env[1593]: time="2024-04-12T18:22:04.460914271Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79991a12abe18d0a9687b2d3f0c6edae54ab7fee93341f9a96c2d4fa17803b46 pid=2277 runtime=io.containerd.runc.v2 Apr 12 18:22:04.504228 systemd[1]: Started cri-containerd-79991a12abe18d0a9687b2d3f0c6edae54ab7fee93341f9a96c2d4fa17803b46.scope. Apr 12 18:22:04.532218 env[1593]: time="2024-04-12T18:22:04.532070094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:22:04.532406 env[1593]: time="2024-04-12T18:22:04.532243366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:22:04.532406 env[1593]: time="2024-04-12T18:22:04.532334652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:22:04.532851 env[1593]: time="2024-04-12T18:22:04.532693090Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b0e231644c5f4c93aff56c116d17c94cbb83e1d55c99bc5c5cc1119a3b95af6a pid=2307 runtime=io.containerd.runc.v2 Apr 12 18:22:04.532851 env[1593]: time="2024-04-12T18:22:04.532166230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:22:04.532851 env[1593]: time="2024-04-12T18:22:04.532695490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:22:04.532851 env[1593]: time="2024-04-12T18:22:04.532724439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:22:04.538177 env[1593]: time="2024-04-12T18:22:04.533735636Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1e4796f635f7dcb136caefbd9567ca10854767ee61a18a226a6ce65bcf4f163 pid=2310 runtime=io.containerd.runc.v2 Apr 12 18:22:04.584924 systemd[1]: Started cri-containerd-b0e231644c5f4c93aff56c116d17c94cbb83e1d55c99bc5c5cc1119a3b95af6a.scope. Apr 12 18:22:04.605107 systemd[1]: Started cri-containerd-f1e4796f635f7dcb136caefbd9567ca10854767ee61a18a226a6ce65bcf4f163.scope. 
Apr 12 18:22:04.655205 env[1593]: time="2024-04-12T18:22:04.654592821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-247,Uid:9faafe6b5f4dc561771e71a8afa35f66,Namespace:kube-system,Attempt:0,} returns sandbox id \"79991a12abe18d0a9687b2d3f0c6edae54ab7fee93341f9a96c2d4fa17803b46\"" Apr 12 18:22:04.675207 env[1593]: time="2024-04-12T18:22:04.675119043Z" level=info msg="CreateContainer within sandbox \"79991a12abe18d0a9687b2d3f0c6edae54ab7fee93341f9a96c2d4fa17803b46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 18:22:04.703487 kubelet[2237]: W0412 18:22:04.702969 2237 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.18.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:04.703487 kubelet[2237]: E0412 18:22:04.703101 2237 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:04.709826 kubelet[2237]: W0412 18:22:04.709711 2237 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.18.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:04.709826 kubelet[2237]: E0412 18:22:04.709782 2237 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.247:6443: connect: connection refused Apr 12 18:22:04.726449 kubelet[2237]: E0412 18:22:04.722647 2237 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-247?timeout=10s\": dial tcp 172.31.18.247:6443: connect: connection refused" interval="1.6s" Apr 12 18:22:04.740774 env[1593]: time="2024-04-12T18:22:04.740692821Z" level=info msg="CreateContainer within sandbox \"79991a12abe18d0a9687b2d3f0c6edae54ab7fee93341f9a96c2d4fa17803b46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eab6a3d8a127075545956f81503e455b63845783a519df5a33bbcaac6a40ec5f\"" Apr 12 18:22:04.742950 env[1593]: time="2024-04-12T18:22:04.742857644Z" level=info msg="StartContainer for \"eab6a3d8a127075545956f81503e455b63845783a519df5a33bbcaac6a40ec5f\"" Apr 12 18:22:04.753432 env[1593]: time="2024-04-12T18:22:04.753346828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-247,Uid:6fbf90375cdac9b9ba40403531591abd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0e231644c5f4c93aff56c116d17c94cbb83e1d55c99bc5c5cc1119a3b95af6a\"" Apr 12 18:22:04.760177 env[1593]: time="2024-04-12T18:22:04.760117399Z" level=info msg="CreateContainer within sandbox \"b0e231644c5f4c93aff56c116d17c94cbb83e1d55c99bc5c5cc1119a3b95af6a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 18:22:04.766800 env[1593]: time="2024-04-12T18:22:04.766740021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-247,Uid:400369484f889a03e632b5177986a51c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1e4796f635f7dcb136caefbd9567ca10854767ee61a18a226a6ce65bcf4f163\"" Apr 12 18:22:04.777221 env[1593]: time="2024-04-12T18:22:04.777119245Z" level=info msg="CreateContainer within sandbox \"f1e4796f635f7dcb136caefbd9567ca10854767ee61a18a226a6ce65bcf4f163\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 18:22:04.806420 env[1593]: 
time="2024-04-12T18:22:04.806262244Z" level=info msg="CreateContainer within sandbox \"b0e231644c5f4c93aff56c116d17c94cbb83e1d55c99bc5c5cc1119a3b95af6a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ab9385916a0aaf13fd47571930de58f1e1626442e71e3bff8e5bbc9f4a826a3f\"" Apr 12 18:22:04.809604 systemd[1]: Started cri-containerd-eab6a3d8a127075545956f81503e455b63845783a519df5a33bbcaac6a40ec5f.scope. Apr 12 18:22:04.815362 env[1593]: time="2024-04-12T18:22:04.815227929Z" level=info msg="StartContainer for \"ab9385916a0aaf13fd47571930de58f1e1626442e71e3bff8e5bbc9f4a826a3f\"" Apr 12 18:22:04.825692 env[1593]: time="2024-04-12T18:22:04.825568314Z" level=info msg="CreateContainer within sandbox \"f1e4796f635f7dcb136caefbd9567ca10854767ee61a18a226a6ce65bcf4f163\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ed380c0f0dca38e4d464bf146ae7e7829e41c1c9d6830483f3f434b4a49a91ba\"" Apr 12 18:22:04.827798 env[1593]: time="2024-04-12T18:22:04.827689029Z" level=info msg="StartContainer for \"ed380c0f0dca38e4d464bf146ae7e7829e41c1c9d6830483f3f434b4a49a91ba\"" Apr 12 18:22:04.845095 kubelet[2237]: I0412 18:22:04.845001 2237 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-247" Apr 12 18:22:04.845849 kubelet[2237]: E0412 18:22:04.845789 2237 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.247:6443/api/v1/nodes\": dial tcp 172.31.18.247:6443: connect: connection refused" node="ip-172-31-18-247" Apr 12 18:22:04.901327 systemd[1]: Started cri-containerd-ed380c0f0dca38e4d464bf146ae7e7829e41c1c9d6830483f3f434b4a49a91ba.scope. Apr 12 18:22:04.919401 systemd[1]: Started cri-containerd-ab9385916a0aaf13fd47571930de58f1e1626442e71e3bff8e5bbc9f4a826a3f.scope. 
Apr 12 18:22:04.994046 env[1593]: time="2024-04-12T18:22:04.993783546Z" level=info msg="StartContainer for \"eab6a3d8a127075545956f81503e455b63845783a519df5a33bbcaac6a40ec5f\" returns successfully" Apr 12 18:22:05.085106 env[1593]: time="2024-04-12T18:22:05.084969919Z" level=info msg="StartContainer for \"ab9385916a0aaf13fd47571930de58f1e1626442e71e3bff8e5bbc9f4a826a3f\" returns successfully" Apr 12 18:22:05.087768 env[1593]: time="2024-04-12T18:22:05.087689448Z" level=info msg="StartContainer for \"ed380c0f0dca38e4d464bf146ae7e7829e41c1c9d6830483f3f434b4a49a91ba\" returns successfully" Apr 12 18:22:06.387845 update_engine[1583]: I0412 18:22:06.387203 1583 update_attempter.cc:509] Updating boot flags... Apr 12 18:22:06.449931 kubelet[2237]: I0412 18:22:06.449890 2237 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-247" Apr 12 18:22:09.521928 kubelet[2237]: E0412 18:22:09.521858 2237 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-247\" not found" node="ip-172-31-18-247" Apr 12 18:22:09.563932 kubelet[2237]: I0412 18:22:09.563883 2237 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-247" Apr 12 18:22:09.734202 kubelet[2237]: E0412 18:22:09.734154 2237 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-247.17c59b6c6e74f738 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-247,UID:ip-172-31-18-247,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-247,},FirstTimestamp:2024-04-12 18:22:03.296274232 +0000 UTC m=+1.098041494,LastTimestamp:2024-04-12 18:22:03.296274232 +0000 UTC m=+1.098041494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-247,}" Apr 12 18:22:10.295454 kubelet[2237]: I0412 18:22:10.295377 2237 apiserver.go:52] "Watching apiserver" Apr 12 18:22:10.315755 kubelet[2237]: I0412 18:22:10.315679 2237 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:22:12.946181 systemd[1]: Reloading. Apr 12 18:22:13.185498 /usr/lib/systemd/system-generators/torcx-generator[2716]: time="2024-04-12T18:22:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:22:13.189394 /usr/lib/systemd/system-generators/torcx-generator[2716]: time="2024-04-12T18:22:13Z" level=info msg="torcx already run" Apr 12 18:22:13.401330 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:22:13.403407 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:22:13.483194 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:22:13.796185 systemd[1]: Stopping kubelet.service... Apr 12 18:22:13.811753 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 18:22:13.812264 systemd[1]: Stopped kubelet.service. Apr 12 18:22:13.812375 systemd[1]: kubelet.service: Consumed 1.829s CPU time. Apr 12 18:22:13.818745 systemd[1]: Started kubelet.service. 
Apr 12 18:22:13.978807 sudo[2774]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:22:13.984884 sudo[2774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:22:14.001575 kubelet[2764]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:22:14.002210 kubelet[2764]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:22:14.002210 kubelet[2764]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:22:14.002210 kubelet[2764]: I0412 18:22:14.001871 2764 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:22:14.018540 kubelet[2764]: I0412 18:22:14.018473 2764 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Apr 12 18:22:14.018540 kubelet[2764]: I0412 18:22:14.018531 2764 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:22:14.018995 kubelet[2764]: I0412 18:22:14.018937 2764 server.go:919] "Client rotation is on, will bootstrap in background" Apr 12 18:22:14.023448 kubelet[2764]: I0412 18:22:14.023379 2764 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 12 18:22:14.029441 kubelet[2764]: I0412 18:22:14.029359 2764 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:22:14.049624 kubelet[2764]: I0412 18:22:14.049458 2764 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 12 18:22:14.050143 kubelet[2764]: I0412 18:22:14.049957 2764 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:22:14.051417 kubelet[2764]: I0412 18:22:14.050434 2764 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null} Apr 12 18:22:14.051417 kubelet[2764]: I0412 18:22:14.050521 2764 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:22:14.051417 kubelet[2764]: I0412 18:22:14.050553 2764 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:22:14.051417 kubelet[2764]: I0412 18:22:14.050658 2764 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:22:14.051933 kubelet[2764]: I0412 18:22:14.051433 2764 kubelet.go:396] "Attempting to sync node with API server" Apr 12 18:22:14.051933 kubelet[2764]: I0412 18:22:14.051490 2764 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:22:14.051933 kubelet[2764]: I0412 18:22:14.051540 2764 kubelet.go:312] "Adding apiserver pod source" Apr 12 18:22:14.051933 kubelet[2764]: I0412 18:22:14.051574 2764 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:22:14.053924 kubelet[2764]: I0412 18:22:14.053855 2764 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:22:14.054447 kubelet[2764]: I0412 18:22:14.054389 2764 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 12 18:22:14.060912 kubelet[2764]: I0412 18:22:14.055282 2764 server.go:1256] "Started kubelet" Apr 12 18:22:14.084062 kubelet[2764]: I0412 18:22:14.078967 2764 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:22:14.093518 kubelet[2764]: I0412 18:22:14.091867 2764 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:22:14.095984 kubelet[2764]: I0412 18:22:14.095933 2764 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 12 18:22:14.102897 kubelet[2764]: I0412 18:22:14.102839 2764 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:22:14.103590 kubelet[2764]: I0412 18:22:14.103541 2764 server.go:233] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:22:14.104068 kubelet[2764]: I0412 18:22:14.103975 2764 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:22:14.115079 kubelet[2764]: I0412 18:22:14.111977 2764 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:22:14.115079 kubelet[2764]: I0412 18:22:14.112385 2764 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 18:22:14.141194 kubelet[2764]: I0412 18:22:14.141125 2764 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 18:22:14.152675 kubelet[2764]: I0412 18:22:14.152624 2764 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 12 18:22:14.152911 kubelet[2764]: I0412 18:22:14.152884 2764 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:22:14.171976 kubelet[2764]: I0412 18:22:14.171930 2764 kubelet.go:2329] "Starting kubelet main sync loop" Apr 12 18:22:14.216051 kubelet[2764]: E0412 18:22:14.215984 2764 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:22:14.216307 kubelet[2764]: I0412 18:22:14.190683 2764 factory.go:221] Registration of the containerd container factory successfully Apr 12 18:22:14.216464 kubelet[2764]: I0412 18:22:14.216440 2764 factory.go:221] Registration of the systemd container factory successfully Apr 12 18:22:14.216771 kubelet[2764]: I0412 18:22:14.216724 2764 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 12 18:22:14.238093 kubelet[2764]: E0412 18:22:14.197699 2764 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:22:14.243965 kubelet[2764]: E0412 18:22:14.211533 2764 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Apr 12 18:22:14.256754 kubelet[2764]: I0412 18:22:14.256687 2764 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-247" Apr 12 18:22:14.304546 kubelet[2764]: I0412 18:22:14.304388 2764 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-247" Apr 12 18:22:14.304722 kubelet[2764]: I0412 18:22:14.304549 2764 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-247" Apr 12 18:22:14.317012 kubelet[2764]: E0412 18:22:14.316876 2764 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 12 18:22:14.325794 kubelet[2764]: E0412 18:22:14.325745 2764 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Apr 12 18:22:14.413938 kubelet[2764]: I0412 18:22:14.413887 2764 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:22:14.414207 kubelet[2764]: I0412 18:22:14.414172 2764 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:22:14.414430 kubelet[2764]: I0412 18:22:14.414397 2764 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:22:14.414936 kubelet[2764]: I0412 18:22:14.414881 2764 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:22:14.415343 kubelet[2764]: I0412 18:22:14.415293 2764 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 12 18:22:14.415547 kubelet[2764]: I0412 18:22:14.415515 2764 policy_none.go:49] "None policy: Start" Apr 12 18:22:14.421881 kubelet[2764]: I0412 18:22:14.421748 2764 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 12 18:22:14.421881 kubelet[2764]: I0412 18:22:14.421824 
2764 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:22:14.422767 kubelet[2764]: I0412 18:22:14.422694 2764 state_mem.go:75] "Updated machine memory state" Apr 12 18:22:14.450832 kubelet[2764]: I0412 18:22:14.450744 2764 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:22:14.461448 kubelet[2764]: I0412 18:22:14.461381 2764 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:22:14.517878 kubelet[2764]: I0412 18:22:14.517817 2764 topology_manager.go:215] "Topology Admit Handler" podUID="400369484f889a03e632b5177986a51c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-247" Apr 12 18:22:14.520276 kubelet[2764]: I0412 18:22:14.520197 2764 topology_manager.go:215] "Topology Admit Handler" podUID="9faafe6b5f4dc561771e71a8afa35f66" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:14.520476 kubelet[2764]: I0412 18:22:14.520399 2764 topology_manager.go:215] "Topology Admit Handler" podUID="6fbf90375cdac9b9ba40403531591abd" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-247" Apr 12 18:22:14.537378 kubelet[2764]: E0412 18:22:14.537329 2764 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-18-247\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-247" Apr 12 18:22:14.621261 kubelet[2764]: I0412 18:22:14.621117 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9faafe6b5f4dc561771e71a8afa35f66-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-247\" (UID: \"9faafe6b5f4dc561771e71a8afa35f66\") " pod="kube-system/kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:14.621261 kubelet[2764]: I0412 18:22:14.621207 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/400369484f889a03e632b5177986a51c-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-247\" (UID: \"400369484f889a03e632b5177986a51c\") " pod="kube-system/kube-apiserver-ip-172-31-18-247" Apr 12 18:22:14.621261 kubelet[2764]: I0412 18:22:14.621263 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9faafe6b5f4dc561771e71a8afa35f66-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-247\" (UID: \"9faafe6b5f4dc561771e71a8afa35f66\") " pod="kube-system/kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:14.621542 kubelet[2764]: I0412 18:22:14.621315 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9faafe6b5f4dc561771e71a8afa35f66-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-247\" (UID: \"9faafe6b5f4dc561771e71a8afa35f66\") " pod="kube-system/kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:14.621542 kubelet[2764]: I0412 18:22:14.621364 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fbf90375cdac9b9ba40403531591abd-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-247\" (UID: \"6fbf90375cdac9b9ba40403531591abd\") " pod="kube-system/kube-scheduler-ip-172-31-18-247" Apr 12 18:22:14.621542 kubelet[2764]: I0412 18:22:14.621452 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/400369484f889a03e632b5177986a51c-ca-certs\") pod \"kube-apiserver-ip-172-31-18-247\" (UID: \"400369484f889a03e632b5177986a51c\") " pod="kube-system/kube-apiserver-ip-172-31-18-247" Apr 12 18:22:14.621542 kubelet[2764]: I0412 18:22:14.621501 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/400369484f889a03e632b5177986a51c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-247\" (UID: \"400369484f889a03e632b5177986a51c\") " pod="kube-system/kube-apiserver-ip-172-31-18-247" Apr 12 18:22:14.621860 kubelet[2764]: I0412 18:22:14.621550 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9faafe6b5f4dc561771e71a8afa35f66-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-247\" (UID: \"9faafe6b5f4dc561771e71a8afa35f66\") " pod="kube-system/kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:14.621860 kubelet[2764]: I0412 18:22:14.621606 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9faafe6b5f4dc561771e71a8afa35f66-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-247\" (UID: \"9faafe6b5f4dc561771e71a8afa35f66\") " pod="kube-system/kube-controller-manager-ip-172-31-18-247" Apr 12 18:22:15.075397 kubelet[2764]: I0412 18:22:15.075335 2764 apiserver.go:52] "Watching apiserver" Apr 12 18:22:15.113231 kubelet[2764]: I0412 18:22:15.113167 2764 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:22:15.208543 sudo[2774]: pam_unix(sudo:session): session closed for user root Apr 12 18:22:15.349121 kubelet[2764]: E0412 18:22:15.348901 2764 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-18-247\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-247" Apr 12 18:22:15.393545 kubelet[2764]: I0412 18:22:15.393493 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-247" podStartSLOduration=1.393425229 podStartE2EDuration="1.393425229s" podCreationTimestamp="2024-04-12 
18:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:22:15.368817705 +0000 UTC m=+1.536063352" watchObservedRunningTime="2024-04-12 18:22:15.393425229 +0000 UTC m=+1.560670804" Apr 12 18:22:15.754456 kubelet[2764]: I0412 18:22:15.754283 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-247" podStartSLOduration=1.754223862 podStartE2EDuration="1.754223862s" podCreationTimestamp="2024-04-12 18:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:22:15.432518958 +0000 UTC m=+1.599764545" watchObservedRunningTime="2024-04-12 18:22:15.754223862 +0000 UTC m=+1.921469449" Apr 12 18:22:18.494473 sudo[1823]: pam_unix(sudo:session): session closed for user root Apr 12 18:22:18.518310 sshd[1820]: pam_unix(sshd:session): session closed for user core Apr 12 18:22:18.524168 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:22:18.524580 systemd[1]: session-5.scope: Consumed 11.513s CPU time. Apr 12 18:22:18.526607 systemd[1]: sshd@4-172.31.18.247:22-139.178.89.65:58010.service: Deactivated successfully. Apr 12 18:22:18.528048 systemd-logind[1581]: Session 5 logged out. Waiting for processes to exit. Apr 12 18:22:18.530810 systemd-logind[1581]: Removed session 5. Apr 12 18:22:27.197769 kubelet[2764]: I0412 18:22:27.197710 2764 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 18:22:27.199444 env[1593]: time="2024-04-12T18:22:27.199358348Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 12 18:22:27.200975 kubelet[2764]: I0412 18:22:27.200907 2764 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 18:22:27.349190 kubelet[2764]: I0412 18:22:27.349093 2764 topology_manager.go:215] "Topology Admit Handler" podUID="2bb14686-8746-4c88-a167-cc594467e4af" podNamespace="kube-system" podName="kube-proxy-mgzj5" Apr 12 18:22:27.364307 systemd[1]: Created slice kubepods-besteffort-pod2bb14686_8746_4c88_a167_cc594467e4af.slice. Apr 12 18:22:27.370117 kubelet[2764]: I0412 18:22:27.369996 2764 topology_manager.go:215] "Topology Admit Handler" podUID="9a609aad-a90d-41cc-81de-f04da6609c50" podNamespace="kube-system" podName="cilium-bdgjk" Apr 12 18:22:27.390005 systemd[1]: Created slice kubepods-burstable-pod9a609aad_a90d_41cc_81de_f04da6609c50.slice. Apr 12 18:22:27.407109 kubelet[2764]: W0412 18:22:27.407062 2764 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-247' and this object Apr 12 18:22:27.407391 kubelet[2764]: E0412 18:22:27.407360 2764 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-247' and this object Apr 12 18:22:27.410643 kubelet[2764]: I0412 18:22:27.410581 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-bpf-maps\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 
18:22:27.410858 kubelet[2764]: I0412 18:22:27.410691 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-etc-cni-netd\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.410973 kubelet[2764]: I0412 18:22:27.410886 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bb14686-8746-4c88-a167-cc594467e4af-xtables-lock\") pod \"kube-proxy-mgzj5\" (UID: \"2bb14686-8746-4c88-a167-cc594467e4af\") " pod="kube-system/kube-proxy-mgzj5" Apr 12 18:22:27.411127 kubelet[2764]: I0412 18:22:27.411069 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cni-path\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.411215 kubelet[2764]: I0412 18:22:27.411165 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bb14686-8746-4c88-a167-cc594467e4af-lib-modules\") pod \"kube-proxy-mgzj5\" (UID: \"2bb14686-8746-4c88-a167-cc594467e4af\") " pod="kube-system/kube-proxy-mgzj5" Apr 12 18:22:27.411300 kubelet[2764]: I0412 18:22:27.411248 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfctx\" (UniqueName: \"kubernetes.io/projected/2bb14686-8746-4c88-a167-cc594467e4af-kube-api-access-xfctx\") pod \"kube-proxy-mgzj5\" (UID: \"2bb14686-8746-4c88-a167-cc594467e4af\") " pod="kube-system/kube-proxy-mgzj5" Apr 12 18:22:27.411392 kubelet[2764]: I0412 18:22:27.411329 2764 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-host-proc-sys-net\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.411469 kubelet[2764]: I0412 18:22:27.411388 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-cgroup\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.411562 kubelet[2764]: I0412 18:22:27.411466 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-xtables-lock\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.411651 kubelet[2764]: I0412 18:22:27.411564 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-config-path\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.411729 kubelet[2764]: I0412 18:22:27.411651 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-run\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.411821 kubelet[2764]: I0412 18:22:27.411729 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/2bb14686-8746-4c88-a167-cc594467e4af-kube-proxy\") pod \"kube-proxy-mgzj5\" (UID: \"2bb14686-8746-4c88-a167-cc594467e4af\") " pod="kube-system/kube-proxy-mgzj5" Apr 12 18:22:27.411821 kubelet[2764]: I0412 18:22:27.411813 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-lib-modules\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.411958 kubelet[2764]: I0412 18:22:27.411891 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66vzj\" (UniqueName: \"kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-kube-api-access-66vzj\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.412064 kubelet[2764]: I0412 18:22:27.411974 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-host-proc-sys-kernel\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.412162 kubelet[2764]: I0412 18:22:27.412068 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-hubble-tls\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.412252 kubelet[2764]: I0412 18:22:27.412169 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a609aad-a90d-41cc-81de-f04da6609c50-clustermesh-secrets\") pod \"cilium-bdgjk\" 
(UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.412329 kubelet[2764]: I0412 18:22:27.412251 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-hostproc\") pod \"cilium-bdgjk\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " pod="kube-system/cilium-bdgjk" Apr 12 18:22:27.413624 kubelet[2764]: W0412 18:22:27.413574 2764 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-247' and this object Apr 12 18:22:27.413931 kubelet[2764]: E0412 18:22:27.413895 2764 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-247' and this object Apr 12 18:22:27.414194 kubelet[2764]: W0412 18:22:27.413580 2764 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-247' and this object Apr 12 18:22:27.414446 kubelet[2764]: E0412 18:22:27.414411 2764 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'ip-172-31-18-247' and this object Apr 12 18:22:27.414647 kubelet[2764]: W0412 18:22:27.413670 2764 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-247' and this object Apr 12 18:22:27.414846 kubelet[2764]: E0412 18:22:27.414816 2764 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-247' and this object Apr 12 18:22:27.415001 kubelet[2764]: W0412 18:22:27.413847 2764 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-247' and this object Apr 12 18:22:27.415222 kubelet[2764]: E0412 18:22:27.415193 2764 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-247' and this object Apr 12 18:22:27.723863 kubelet[2764]: I0412 18:22:27.723769 2764 topology_manager.go:215] "Topology Admit Handler" podUID="dce65d8d-6362-4b97-8575-6f19d7f56f90" podNamespace="kube-system" podName="cilium-operator-5cc964979-btntv" Apr 12 18:22:27.764079 systemd[1]: Created slice kubepods-besteffort-poddce65d8d_6362_4b97_8575_6f19d7f56f90.slice. 
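The repeated `forbidden ... no relationship found between node 'ip-172-31-18-247' and this object` warnings above come from the API server's node authorizer: a kubelet is only allowed to read a Secret or ConfigMap once a pod that references it is bound to its node, and here the kubelet's reflectors raced the authorizer's graph update for the freshly admitted pods. A toy sketch of that relationship check (illustrative only; the real authorizer is a graph walk inside kube-apiserver, and `node_can_read` is a hypothetical name):

```python
# Toy model of the check behind "no relationship found between node ... and
# this object" (illustrative only; the real node authorizer in kube-apiserver
# walks a dependency graph, not a list of dicts).
def node_can_read(node, resource, bound_pods):
    """A kubelet may read a Secret/ConfigMap only if some pod bound to its
    node references that object."""
    return any(p["node"] == node and resource in p["refs"] for p in bound_pods)

node = "ip-172-31-18-247"

# Before the pod binding propagates into the authorizer's graph: denied,
# which is exactly the transient "forbidden" state logged above.
assert not node_can_read(node, ("configmaps", "cilium-config"), [])

# Once cilium-bdgjk's binding is visible, its object references resolve
# and the reflectors recover on their next list/watch attempt.
bound_pods = [{"name": "cilium-bdgjk", "node": node,
               "refs": {("configmaps", "cilium-config"),
                        ("secrets", "cilium-clustermesh"),
                        ("secrets", "hubble-server-certs")}}]
assert node_can_read(node, ("secrets", "hubble-server-certs"), bound_pods)
```

This is why the denials are warnings rather than fatal errors: the reflectors simply retry, and the later log entries show the volumes mounting successfully.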
Apr 12 18:22:27.815338 kubelet[2764]: I0412 18:22:27.815274 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8nfd\" (UniqueName: \"kubernetes.io/projected/dce65d8d-6362-4b97-8575-6f19d7f56f90-kube-api-access-z8nfd\") pod \"cilium-operator-5cc964979-btntv\" (UID: \"dce65d8d-6362-4b97-8575-6f19d7f56f90\") " pod="kube-system/cilium-operator-5cc964979-btntv" Apr 12 18:22:27.815870 kubelet[2764]: I0412 18:22:27.815817 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dce65d8d-6362-4b97-8575-6f19d7f56f90-cilium-config-path\") pod \"cilium-operator-5cc964979-btntv\" (UID: \"dce65d8d-6362-4b97-8575-6f19d7f56f90\") " pod="kube-system/cilium-operator-5cc964979-btntv" Apr 12 18:22:28.513241 kubelet[2764]: E0412 18:22:28.513186 2764 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:22:28.513960 kubelet[2764]: E0412 18:22:28.513388 2764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2bb14686-8746-4c88-a167-cc594467e4af-kube-proxy podName:2bb14686-8746-4c88-a167-cc594467e4af nodeName:}" failed. No retries permitted until 2024-04-12 18:22:29.013320057 +0000 UTC m=+15.180565632 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2bb14686-8746-4c88-a167-cc594467e4af-kube-proxy") pod "kube-proxy-mgzj5" (UID: "2bb14686-8746-4c88-a167-cc594467e4af") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:22:28.514160 kubelet[2764]: E0412 18:22:28.514005 2764 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Apr 12 18:22:28.514252 kubelet[2764]: E0412 18:22:28.514205 2764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a609aad-a90d-41cc-81de-f04da6609c50-clustermesh-secrets podName:9a609aad-a90d-41cc-81de-f04da6609c50 nodeName:}" failed. No retries permitted until 2024-04-12 18:22:29.014172036 +0000 UTC m=+15.181417611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9a609aad-a90d-41cc-81de-f04da6609c50-clustermesh-secrets") pod "cilium-bdgjk" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50") : failed to sync secret cache: timed out waiting for the condition Apr 12 18:22:28.514386 kubelet[2764]: E0412 18:22:28.514345 2764 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Apr 12 18:22:28.514386 kubelet[2764]: E0412 18:22:28.514373 2764 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-bdgjk: failed to sync secret cache: timed out waiting for the condition Apr 12 18:22:28.515097 kubelet[2764]: E0412 18:22:28.514534 2764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-hubble-tls podName:9a609aad-a90d-41cc-81de-f04da6609c50 nodeName:}" failed. No retries permitted until 2024-04-12 18:22:29.014480094 +0000 UTC m=+15.181725669 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-hubble-tls") pod "cilium-bdgjk" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50") : failed to sync secret cache: timed out waiting for the condition Apr 12 18:22:28.552279 kubelet[2764]: E0412 18:22:28.552157 2764 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:22:28.552469 kubelet[2764]: E0412 18:22:28.552294 2764 projected.go:200] Error preparing data for projected volume kube-api-access-xfctx for pod kube-system/kube-proxy-mgzj5: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:22:28.552469 kubelet[2764]: E0412 18:22:28.552424 2764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2bb14686-8746-4c88-a167-cc594467e4af-kube-api-access-xfctx podName:2bb14686-8746-4c88-a167-cc594467e4af nodeName:}" failed. No retries permitted until 2024-04-12 18:22:29.052392114 +0000 UTC m=+15.219637689 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xfctx" (UniqueName: "kubernetes.io/projected/2bb14686-8746-4c88-a167-cc594467e4af-kube-api-access-xfctx") pod "kube-proxy-mgzj5" (UID: "2bb14686-8746-4c88-a167-cc594467e4af") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:22:28.561536 kubelet[2764]: E0412 18:22:28.561460 2764 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:22:28.561756 kubelet[2764]: E0412 18:22:28.561551 2764 projected.go:200] Error preparing data for projected volume kube-api-access-66vzj for pod kube-system/cilium-bdgjk: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:22:28.561756 kubelet[2764]: E0412 18:22:28.561679 2764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-kube-api-access-66vzj podName:9a609aad-a90d-41cc-81de-f04da6609c50 nodeName:}" failed. No retries permitted until 2024-04-12 18:22:29.061645037 +0000 UTC m=+15.228890612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-66vzj" (UniqueName: "kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-kube-api-access-66vzj") pod "cilium-bdgjk" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:22:28.971684 env[1593]: time="2024-04-12T18:22:28.971584170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-btntv,Uid:dce65d8d-6362-4b97-8575-6f19d7f56f90,Namespace:kube-system,Attempt:0,}" Apr 12 18:22:29.018273 env[1593]: time="2024-04-12T18:22:29.018078567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:22:29.018505 env[1593]: time="2024-04-12T18:22:29.018308453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:22:29.018505 env[1593]: time="2024-04-12T18:22:29.018409667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:22:29.019246 env[1593]: time="2024-04-12T18:22:29.019018474Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d pid=2843 runtime=io.containerd.runc.v2 Apr 12 18:22:29.075564 systemd[1]: Started cri-containerd-ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d.scope. Apr 12 18:22:29.182774 env[1593]: time="2024-04-12T18:22:29.182700663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgzj5,Uid:2bb14686-8746-4c88-a167-cc594467e4af,Namespace:kube-system,Attempt:0,}" Apr 12 18:22:29.184759 env[1593]: time="2024-04-12T18:22:29.184690138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-btntv,Uid:dce65d8d-6362-4b97-8575-6f19d7f56f90,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d\"" Apr 12 18:22:29.190096 env[1593]: time="2024-04-12T18:22:29.189133491Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:22:29.198498 env[1593]: time="2024-04-12T18:22:29.198253250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bdgjk,Uid:9a609aad-a90d-41cc-81de-f04da6609c50,Namespace:kube-system,Attempt:0,}" Apr 12 18:22:29.239137 env[1593]: time="2024-04-12T18:22:29.238801506Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:22:29.239137 env[1593]: time="2024-04-12T18:22:29.239018982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:22:29.241290 env[1593]: time="2024-04-12T18:22:29.241165282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:22:29.241853 env[1593]: time="2024-04-12T18:22:29.241675036Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2b9ec175b4c65285c4abccf05256ae071e11c19b9c29ca8ebaa1e18ebfad15f pid=2888 runtime=io.containerd.runc.v2 Apr 12 18:22:29.248358 env[1593]: time="2024-04-12T18:22:29.248155518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:22:29.248637 env[1593]: time="2024-04-12T18:22:29.248400117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:22:29.248637 env[1593]: time="2024-04-12T18:22:29.248501522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:22:29.249182 env[1593]: time="2024-04-12T18:22:29.249045982Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b pid=2903 runtime=io.containerd.runc.v2 Apr 12 18:22:29.276230 systemd[1]: Started cri-containerd-d2b9ec175b4c65285c4abccf05256ae071e11c19b9c29ca8ebaa1e18ebfad15f.scope. Apr 12 18:22:29.302912 systemd[1]: Started cri-containerd-1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b.scope. 
Apr 12 18:22:29.400325 env[1593]: time="2024-04-12T18:22:29.400250066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bdgjk,Uid:9a609aad-a90d-41cc-81de-f04da6609c50,Namespace:kube-system,Attempt:0,} returns sandbox id \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\"" Apr 12 18:22:29.405752 env[1593]: time="2024-04-12T18:22:29.405679876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgzj5,Uid:2bb14686-8746-4c88-a167-cc594467e4af,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2b9ec175b4c65285c4abccf05256ae071e11c19b9c29ca8ebaa1e18ebfad15f\"" Apr 12 18:22:29.418399 env[1593]: time="2024-04-12T18:22:29.418320915Z" level=info msg="CreateContainer within sandbox \"d2b9ec175b4c65285c4abccf05256ae071e11c19b9c29ca8ebaa1e18ebfad15f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:22:29.451808 env[1593]: time="2024-04-12T18:22:29.451680203Z" level=info msg="CreateContainer within sandbox \"d2b9ec175b4c65285c4abccf05256ae071e11c19b9c29ca8ebaa1e18ebfad15f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92c024800480048bcf1099d8a909e234fc492faac23f1ce1fc291e6beec920c4\"" Apr 12 18:22:29.457840 env[1593]: time="2024-04-12T18:22:29.457736149Z" level=info msg="StartContainer for \"92c024800480048bcf1099d8a909e234fc492faac23f1ce1fc291e6beec920c4\"" Apr 12 18:22:29.504813 systemd[1]: Started cri-containerd-92c024800480048bcf1099d8a909e234fc492faac23f1ce1fc291e6beec920c4.scope. Apr 12 18:22:29.606389 env[1593]: time="2024-04-12T18:22:29.606310149Z" level=info msg="StartContainer for \"92c024800480048bcf1099d8a909e234fc492faac23f1ce1fc291e6beec920c4\" returns successfully" Apr 12 18:22:30.372268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626061339.mount: Deactivated successfully. 
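The `No retries permitted until ... (durationBeforeRetry 500ms)` entries above are kubelet's per-volume exponential backoff in `nestedpendingoperations`: the retry-until timestamp is the failure time plus the delay (e.g. 18:22:29.013320057 = 18:22:28.513320057 + 500ms), and the `m=+15.18...` suffixes are seconds of monotonic time since kubelet start. A sketch of the schedule, assuming the commonly cited parameters (500ms initial delay, doubling, capped at 2m2s — the 500ms start is visible in the log; treat the factor and cap as assumptions):

```python
from itertools import islice

def duration_before_retry(initial=0.5, factor=2.0, cap=122.0):
    """Yield successive durationBeforeRetry values in seconds: 0.5, 1, 2, ...
    The 500ms start matches the logged value; the 2x factor and the
    2m2s (122s) cap are assumptions about kubelet's backoff, not
    something this log confirms."""
    d = initial
    while True:
        yield d
        d = min(d * factor, cap)

delays = list(islice(duration_before_retry(), 12))
assert delays[:3] == [0.5, 1.0, 2.0]   # first retry matches the logged 500ms
assert max(delays) == 122.0            # assumed cap
```

In this boot the first 500ms retry already succeeds, because the cache-sync timeouts resolve as soon as the node-authorizer denials above clear.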
Apr 12 18:22:30.417209 kubelet[2764]: I0412 18:22:30.417130 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mgzj5" podStartSLOduration=3.417024582 podStartE2EDuration="3.417024582s" podCreationTimestamp="2024-04-12 18:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:22:30.414637564 +0000 UTC m=+16.581883139" watchObservedRunningTime="2024-04-12 18:22:30.417024582 +0000 UTC m=+16.584306603" Apr 12 18:22:31.714058 env[1593]: time="2024-04-12T18:22:31.713937605Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:31.718770 env[1593]: time="2024-04-12T18:22:31.718679064Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:31.724374 env[1593]: time="2024-04-12T18:22:31.724284919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:31.725908 env[1593]: time="2024-04-12T18:22:31.725834672Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 12 18:22:31.734052 env[1593]: time="2024-04-12T18:22:31.730837866Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:22:31.734950 
env[1593]: time="2024-04-12T18:22:31.734873867Z" level=info msg="CreateContainer within sandbox \"ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 18:22:31.758074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355097655.mount: Deactivated successfully. Apr 12 18:22:31.775315 env[1593]: time="2024-04-12T18:22:31.775222763Z" level=info msg="CreateContainer within sandbox \"ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\"" Apr 12 18:22:31.778991 env[1593]: time="2024-04-12T18:22:31.776823987Z" level=info msg="StartContainer for \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\"" Apr 12 18:22:31.821929 systemd[1]: Started cri-containerd-4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457.scope. Apr 12 18:22:31.913521 env[1593]: time="2024-04-12T18:22:31.913404073Z" level=info msg="StartContainer for \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\" returns successfully" Apr 12 18:22:32.752195 systemd[1]: run-containerd-runc-k8s.io-4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457-runc.tSyqFY.mount: Deactivated successfully. 
Apr 12 18:22:34.402326 kubelet[2764]: I0412 18:22:34.402240 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-btntv" podStartSLOduration=4.862864398 podStartE2EDuration="7.402107467s" podCreationTimestamp="2024-04-12 18:22:27 +0000 UTC" firstStartedPulling="2024-04-12 18:22:29.187604302 +0000 UTC m=+15.354849877" lastFinishedPulling="2024-04-12 18:22:31.726847359 +0000 UTC m=+17.894092946" observedRunningTime="2024-04-12 18:22:32.489142594 +0000 UTC m=+18.656388181" watchObservedRunningTime="2024-04-12 18:22:34.402107467 +0000 UTC m=+20.569353054" Apr 12 18:22:39.684860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758836128.mount: Deactivated successfully. Apr 12 18:22:44.101847 env[1593]: time="2024-04-12T18:22:44.101774935Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:44.105477 env[1593]: time="2024-04-12T18:22:44.105416596Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:44.108902 env[1593]: time="2024-04-12T18:22:44.108787046Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:22:44.110540 env[1593]: time="2024-04-12T18:22:44.110464032Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 12 18:22:44.119701 env[1593]: 
time="2024-04-12T18:22:44.119611753Z" level=info msg="CreateContainer within sandbox \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:22:44.141998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060535347.mount: Deactivated successfully. Apr 12 18:22:44.159633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961242423.mount: Deactivated successfully. Apr 12 18:22:44.169280 env[1593]: time="2024-04-12T18:22:44.169207715Z" level=info msg="CreateContainer within sandbox \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3\"" Apr 12 18:22:44.173798 env[1593]: time="2024-04-12T18:22:44.173023047Z" level=info msg="StartContainer for \"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3\"" Apr 12 18:22:44.217645 systemd[1]: Started cri-containerd-844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3.scope. Apr 12 18:22:44.290361 env[1593]: time="2024-04-12T18:22:44.290273654Z" level=info msg="StartContainer for \"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3\" returns successfully" Apr 12 18:22:44.313632 systemd[1]: cri-containerd-844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3.scope: Deactivated successfully. 
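The `pod_startup_latency_tracker` entries above can be checked by hand from the logged timestamps: for kube-proxy-mgzj5 (zero-value pull timestamps, so no image pull) `podStartSLOduration` is simply `watchObservedRunningTime - podCreationTimestamp`, and for cilium-operator it is the end-to-end duration minus the image-pull window, agreeing to within nanosecond-scale rounding. A quick arithmetic check, using only values from the log:

```python
def to_seconds(hms):
    """Convert an HH:MM:SS[.frac] wall-clock time to seconds since midnight."""
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

# kube-proxy-mgzj5: no image pull, so SLO duration == watchObservedRunningTime
# minus podCreationTimestamp.
slo_proxy = to_seconds("18:22:30.417024582") - to_seconds("18:22:27")
assert abs(slo_proxy - 3.417024582) < 1e-6

# cilium-operator-5cc964979-btntv: SLO duration == E2E duration minus the
# firstStartedPulling -> lastFinishedPulling window (pull time is excluded
# from the startup SLO).
pull = to_seconds("18:22:31.726847359") - to_seconds("18:22:29.187604302")
e2e = to_seconds("18:22:34.402107467") - to_seconds("18:22:27")
assert abs((e2e - pull) - 4.862864398) < 1e-6
```

The residual difference of roughly 1e-8 s on the operator pod is consistent with the tracker mixing wall-clock and monotonic readings, not an arithmetic error.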
Apr 12 18:22:44.984741 env[1593]: time="2024-04-12T18:22:44.984651093Z" level=info msg="shim disconnected" id=844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3 Apr 12 18:22:44.985155 env[1593]: time="2024-04-12T18:22:44.984875238Z" level=warning msg="cleaning up after shim disconnected" id=844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3 namespace=k8s.io Apr 12 18:22:44.985155 env[1593]: time="2024-04-12T18:22:44.984940881Z" level=info msg="cleaning up dead shim" Apr 12 18:22:45.013116 env[1593]: time="2024-04-12T18:22:45.012995690Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:22:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3211 runtime=io.containerd.runc.v2\n" Apr 12 18:22:45.136688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3-rootfs.mount: Deactivated successfully. Apr 12 18:22:45.467847 env[1593]: time="2024-04-12T18:22:45.467717160Z" level=info msg="CreateContainer within sandbox \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:22:45.502885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3363448110.mount: Deactivated successfully. Apr 12 18:22:45.512998 env[1593]: time="2024-04-12T18:22:45.512835298Z" level=info msg="CreateContainer within sandbox \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161\"" Apr 12 18:22:45.520472 env[1593]: time="2024-04-12T18:22:45.520380299Z" level=info msg="StartContainer for \"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161\"" Apr 12 18:22:45.568024 systemd[1]: Started cri-containerd-c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161.scope. 
Apr 12 18:22:45.645993 env[1593]: time="2024-04-12T18:22:45.645906486Z" level=info msg="StartContainer for \"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161\" returns successfully" Apr 12 18:22:45.670015 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:22:45.670785 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:22:45.672987 systemd[1]: Stopping systemd-sysctl.service... Apr 12 18:22:45.680282 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:22:45.681336 systemd[1]: cri-containerd-c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161.scope: Deactivated successfully. Apr 12 18:22:45.701662 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:22:45.744116 env[1593]: time="2024-04-12T18:22:45.742674045Z" level=info msg="shim disconnected" id=c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161 Apr 12 18:22:45.744116 env[1593]: time="2024-04-12T18:22:45.742754413Z" level=warning msg="cleaning up after shim disconnected" id=c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161 namespace=k8s.io Apr 12 18:22:45.744116 env[1593]: time="2024-04-12T18:22:45.742779254Z" level=info msg="cleaning up dead shim" Apr 12 18:22:45.760067 env[1593]: time="2024-04-12T18:22:45.759951366Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:22:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3275 runtime=io.containerd.runc.v2\n" Apr 12 18:22:46.135888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161-rootfs.mount: Deactivated successfully. 
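Each short-lived init container above exits with the same three-entry pattern: `shim disconnected`, `cleaning up after shim disconnected`, then `cleaning up dead shim` with a fresh cleanup pid. That is containerd's runc v2 shim shutting down normally after its container exits, not a failure. The container id in those entries is a 64-character hex digest, which makes the entries easy to correlate mechanically; a small parser over the log text above:

```python
import re

# Container ids in containerd shim log entries are 64 hex characters.
SHIM_RE = re.compile(r'msg="shim disconnected" id=(?P<id>[0-9a-f]{64})')

# Verbatim fragment from the apply-sysctl-overwrites teardown logged above.
line = ('time="2024-04-12T18:22:45.742674045Z" level=info '
        'msg="shim disconnected" '
        'id=c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161')

m = SHIM_RE.search(line)
assert m is not None
# The id matches the cri-containerd scope systemd started for this container.
assert m.group("id").startswith("c8f5615ab01ead")
```

Grepping a journal for this pattern and grouping by id is a quick way to pair each `Started cri-containerd-<id>.scope` entry with its eventual teardown.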
Apr 12 18:22:46.472858 env[1593]: time="2024-04-12T18:22:46.472178020Z" level=info msg="CreateContainer within sandbox \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:22:46.532991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217228793.mount: Deactivated successfully. Apr 12 18:22:46.534280 env[1593]: time="2024-04-12T18:22:46.534152262Z" level=info msg="CreateContainer within sandbox \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67\"" Apr 12 18:22:46.537121 env[1593]: time="2024-04-12T18:22:46.535784820Z" level=info msg="StartContainer for \"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67\"" Apr 12 18:22:46.577900 systemd[1]: Started cri-containerd-9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67.scope. Apr 12 18:22:46.662236 env[1593]: time="2024-04-12T18:22:46.662148991Z" level=info msg="StartContainer for \"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67\" returns successfully" Apr 12 18:22:46.667658 systemd[1]: cri-containerd-9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67.scope: Deactivated successfully. 
Apr 12 18:22:46.726640 env[1593]: time="2024-04-12T18:22:46.725683813Z" level=info msg="shim disconnected" id=9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67 Apr 12 18:22:46.727203 env[1593]: time="2024-04-12T18:22:46.727123464Z" level=warning msg="cleaning up after shim disconnected" id=9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67 namespace=k8s.io Apr 12 18:22:46.727513 env[1593]: time="2024-04-12T18:22:46.727438033Z" level=info msg="cleaning up dead shim" Apr 12 18:22:46.744444 env[1593]: time="2024-04-12T18:22:46.744357728Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:22:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3332 runtime=io.containerd.runc.v2\n" Apr 12 18:22:47.488080 env[1593]: time="2024-04-12T18:22:47.487959739Z" level=info msg="CreateContainer within sandbox \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:22:47.529100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2064576196.mount: Deactivated successfully. Apr 12 18:22:47.552454 env[1593]: time="2024-04-12T18:22:47.552324433Z" level=info msg="CreateContainer within sandbox \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9\"" Apr 12 18:22:47.555088 env[1593]: time="2024-04-12T18:22:47.554262427Z" level=info msg="StartContainer for \"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9\"" Apr 12 18:22:47.592892 systemd[1]: Started cri-containerd-60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9.scope. Apr 12 18:22:47.670558 systemd[1]: cri-containerd-60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9.scope: Deactivated successfully. 
Apr 12 18:22:47.673311 env[1593]: time="2024-04-12T18:22:47.671733260Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a609aad_a90d_41cc_81de_f04da6609c50.slice/cri-containerd-60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9.scope/memory.events\": no such file or directory" Apr 12 18:22:47.677451 env[1593]: time="2024-04-12T18:22:47.677379356Z" level=info msg="StartContainer for \"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9\" returns successfully" Apr 12 18:22:47.725755 env[1593]: time="2024-04-12T18:22:47.725663845Z" level=info msg="shim disconnected" id=60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9 Apr 12 18:22:47.727393 env[1593]: time="2024-04-12T18:22:47.727315747Z" level=warning msg="cleaning up after shim disconnected" id=60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9 namespace=k8s.io Apr 12 18:22:47.727767 env[1593]: time="2024-04-12T18:22:47.727675462Z" level=info msg="cleaning up dead shim" Apr 12 18:22:47.746797 env[1593]: time="2024-04-12T18:22:47.746568592Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:22:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3390 runtime=io.containerd.runc.v2\n" Apr 12 18:22:48.497560 env[1593]: time="2024-04-12T18:22:48.497490420Z" level=info msg="CreateContainer within sandbox \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:22:48.541749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3595853064.mount: Deactivated successfully. 
Apr 12 18:22:48.549141 env[1593]: time="2024-04-12T18:22:48.548998189Z" level=info msg="CreateContainer within sandbox \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\"" Apr 12 18:22:48.551166 env[1593]: time="2024-04-12T18:22:48.550335714Z" level=info msg="StartContainer for \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\"" Apr 12 18:22:48.605982 systemd[1]: Started cri-containerd-a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943.scope. Apr 12 18:22:48.681450 env[1593]: time="2024-04-12T18:22:48.681356601Z" level=info msg="StartContainer for \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\" returns successfully" Apr 12 18:22:48.929256 kubelet[2764]: I0412 18:22:48.929188 2764 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 12 18:22:48.950087 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
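Taken together, the CreateContainer/StartContainer cycles in sandbox `1304c78a...` walk Cilium's init containers strictly in sequence — mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state — each one exiting (scope deactivated, shim disconnected) before the next is created, and only then does the long-running cilium-agent container start, after which the kubelet reports the node ready. That ordering is the standard kubelet init-container contract, summarized here from the names actually logged above:

```python
# ContainerMetadata names in the order they were created in the log above.
# Kubelet runs init containers one at a time, in spec order; the regular
# container starts only after every init container has exited successfully.
observed = ["mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs",
            "clean-cilium-state", "cilium-agent"]

init_containers, agent = observed[:-1], observed[-1]
assert agent == "cilium-agent"
# BPF filesystem is mounted before stale state is cleaned.
assert observed.index("mount-bpf-fs") < observed.index("clean-cilium-state")
```

The ~14-second gap before mount-cgroup started (18:22:29 sandbox, 18:22:44 first container) is the `quay.io/cilium/cilium:v1.12.5` image pull visible in the ImageCreate/ImageUpdate events.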
Apr 12 18:22:48.973763 kubelet[2764]: I0412 18:22:48.973685 2764 topology_manager.go:215] "Topology Admit Handler" podUID="8bddbc8f-8806-46cf-ac17-6ce9152b7449" podNamespace="kube-system" podName="coredns-76f75df574-8qhj6"
Apr 12 18:22:48.982455 kubelet[2764]: W0412 18:22:48.982370 2764 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-247' and this object
Apr 12 18:22:48.982455 kubelet[2764]: E0412 18:22:48.982456 2764 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-18-247" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-247' and this object
Apr 12 18:22:48.984993 kubelet[2764]: I0412 18:22:48.984112 2764 topology_manager.go:215] "Topology Admit Handler" podUID="f7374bce-7fe4-4135-9057-0589b1c335c3" podNamespace="kube-system" podName="coredns-76f75df574-x4sdx"
Apr 12 18:22:48.988443 systemd[1]: Created slice kubepods-burstable-pod8bddbc8f_8806_46cf_ac17_6ce9152b7449.slice.
Apr 12 18:22:49.008811 systemd[1]: Created slice kubepods-burstable-podf7374bce_7fe4_4135_9057_0589b1c335c3.slice.
Apr 12 18:22:49.022769 kubelet[2764]: I0412 18:22:49.022679 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bddbc8f-8806-46cf-ac17-6ce9152b7449-config-volume\") pod \"coredns-76f75df574-8qhj6\" (UID: \"8bddbc8f-8806-46cf-ac17-6ce9152b7449\") " pod="kube-system/coredns-76f75df574-8qhj6"
Apr 12 18:22:49.022769 kubelet[2764]: I0412 18:22:49.022779 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqnpv\" (UniqueName: \"kubernetes.io/projected/8bddbc8f-8806-46cf-ac17-6ce9152b7449-kube-api-access-xqnpv\") pod \"coredns-76f75df574-8qhj6\" (UID: \"8bddbc8f-8806-46cf-ac17-6ce9152b7449\") " pod="kube-system/coredns-76f75df574-8qhj6"
Apr 12 18:22:49.123706 kubelet[2764]: I0412 18:22:49.123612 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7374bce-7fe4-4135-9057-0589b1c335c3-config-volume\") pod \"coredns-76f75df574-x4sdx\" (UID: \"f7374bce-7fe4-4135-9057-0589b1c335c3\") " pod="kube-system/coredns-76f75df574-x4sdx"
Apr 12 18:22:49.123949 kubelet[2764]: I0412 18:22:49.123844 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st4t8\" (UniqueName: \"kubernetes.io/projected/f7374bce-7fe4-4135-9057-0589b1c335c3-kube-api-access-st4t8\") pod \"coredns-76f75df574-x4sdx\" (UID: \"f7374bce-7fe4-4135-9057-0589b1c335c3\") " pod="kube-system/coredns-76f75df574-x4sdx"
Apr 12 18:22:49.937082 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Apr 12 18:22:50.124799 kubelet[2764]: E0412 18:22:50.124737 2764 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Apr 12 18:22:50.126023 kubelet[2764]: E0412 18:22:50.125974 2764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8bddbc8f-8806-46cf-ac17-6ce9152b7449-config-volume podName:8bddbc8f-8806-46cf-ac17-6ce9152b7449 nodeName:}" failed. No retries permitted until 2024-04-12 18:22:50.625929194 +0000 UTC m=+36.793174769 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8bddbc8f-8806-46cf-ac17-6ce9152b7449-config-volume") pod "coredns-76f75df574-8qhj6" (UID: "8bddbc8f-8806-46cf-ac17-6ce9152b7449") : failed to sync configmap cache: timed out waiting for the condition
Apr 12 18:22:50.226576 kubelet[2764]: E0412 18:22:50.226389 2764 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Apr 12 18:22:50.226576 kubelet[2764]: E0412 18:22:50.226525 2764 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7374bce-7fe4-4135-9057-0589b1c335c3-config-volume podName:f7374bce-7fe4-4135-9057-0589b1c335c3 nodeName:}" failed. No retries permitted until 2024-04-12 18:22:50.7264931 +0000 UTC m=+36.893738663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f7374bce-7fe4-4135-9057-0589b1c335c3-config-volume") pod "coredns-76f75df574-x4sdx" (UID: "f7374bce-7fe4-4135-9057-0589b1c335c3") : failed to sync configmap cache: timed out waiting for the condition
Apr 12 18:22:50.803002 env[1593]: time="2024-04-12T18:22:50.802874908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8qhj6,Uid:8bddbc8f-8806-46cf-ac17-6ce9152b7449,Namespace:kube-system,Attempt:0,}"
Apr 12 18:22:50.819655 env[1593]: time="2024-04-12T18:22:50.819145118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x4sdx,Uid:f7374bce-7fe4-4135-9057-0589b1c335c3,Namespace:kube-system,Attempt:0,}"
Apr 12 18:22:51.788380 systemd-networkd[1403]: cilium_host: Link UP
Apr 12 18:22:51.790324 systemd-networkd[1403]: cilium_net: Link UP
Apr 12 18:22:51.796739 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Apr 12 18:22:51.796937 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Apr 12 18:22:51.796967 systemd-networkd[1403]: cilium_net: Gained carrier
Apr 12 18:22:51.798403 systemd-networkd[1403]: cilium_host: Gained carrier
Apr 12 18:22:51.800798 (udev-worker)[3494]: Network interface NamePolicy= disabled on kernel command line.
Apr 12 18:22:51.803662 (udev-worker)[3554]: Network interface NamePolicy= disabled on kernel command line.
Apr 12 18:22:52.015362 (udev-worker)[3560]: Network interface NamePolicy= disabled on kernel command line.
Apr 12 18:22:52.025996 systemd-networkd[1403]: cilium_vxlan: Link UP
Apr 12 18:22:52.026016 systemd-networkd[1403]: cilium_vxlan: Gained carrier
Apr 12 18:22:52.027293 systemd-networkd[1403]: cilium_net: Gained IPv6LL
Apr 12 18:22:52.161285 systemd-networkd[1403]: cilium_host: Gained IPv6LL
Apr 12 18:22:52.622090 kernel: NET: Registered PF_ALG protocol family
Apr 12 18:22:53.681275 systemd-networkd[1403]: cilium_vxlan: Gained IPv6LL
Apr 12 18:22:54.315095 (udev-worker)[3561]: Network interface NamePolicy= disabled on kernel command line.
Apr 12 18:22:54.321338 systemd-networkd[1403]: lxc_health: Link UP
Apr 12 18:22:54.340247 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:22:54.339928 systemd-networkd[1403]: lxc_health: Gained carrier
Apr 12 18:22:54.920202 systemd-networkd[1403]: lxc22d42874a5a9: Link UP
Apr 12 18:22:54.928125 kernel: eth0: renamed from tmp8ae07
Apr 12 18:22:54.940617 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc22d42874a5a9: link becomes ready
Apr 12 18:22:54.939951 systemd-networkd[1403]: lxc22d42874a5a9: Gained carrier
Apr 12 18:22:54.965485 systemd-networkd[1403]: lxc7e2ef62f8e13: Link UP
Apr 12 18:22:54.979883 kernel: eth0: renamed from tmpfee94
Apr 12 18:22:54.992103 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7e2ef62f8e13: link becomes ready
Apr 12 18:22:54.993458 systemd-networkd[1403]: lxc7e2ef62f8e13: Gained carrier
Apr 12 18:22:54.993601 (udev-worker)[3894]: Network interface NamePolicy= disabled on kernel command line.
Apr 12 18:22:55.250466 kubelet[2764]: I0412 18:22:55.250270 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bdgjk" podStartSLOduration=13.543081802 podStartE2EDuration="28.250175467s" podCreationTimestamp="2024-04-12 18:22:27 +0000 UTC" firstStartedPulling="2024-04-12 18:22:29.404170513 +0000 UTC m=+15.571416088" lastFinishedPulling="2024-04-12 18:22:44.111264178 +0000 UTC m=+30.278509753" observedRunningTime="2024-04-12 18:22:49.54738894 +0000 UTC m=+35.714634563" watchObservedRunningTime="2024-04-12 18:22:55.250175467 +0000 UTC m=+41.417421066"
Apr 12 18:22:55.857959 systemd-networkd[1403]: lxc_health: Gained IPv6LL
Apr 12 18:22:56.241947 systemd-networkd[1403]: lxc22d42874a5a9: Gained IPv6LL
Apr 12 18:22:56.433778 systemd-networkd[1403]: lxc7e2ef62f8e13: Gained IPv6LL
Apr 12 18:23:02.611178 systemd[1]: Started sshd@5-172.31.18.247:22-139.178.89.65:50552.service.
Apr 12 18:23:02.807005 sshd[3921]: Accepted publickey for core from 139.178.89.65 port 50552 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:02.809461 sshd[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:02.819296 systemd-logind[1581]: New session 6 of user core.
Apr 12 18:23:02.821750 systemd[1]: Started session-6.scope.
Apr 12 18:23:03.163900 sshd[3921]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:03.170023 systemd-logind[1581]: Session 6 logged out. Waiting for processes to exit.
Apr 12 18:23:03.170792 systemd[1]: sshd@5-172.31.18.247:22-139.178.89.65:50552.service: Deactivated successfully.
Apr 12 18:23:03.173393 systemd[1]: session-6.scope: Deactivated successfully.
Apr 12 18:23:03.177651 systemd-logind[1581]: Removed session 6.
Apr 12 18:23:05.839326 env[1593]: time="2024-04-12T18:23:05.839139562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:23:05.839326 env[1593]: time="2024-04-12T18:23:05.839240386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:23:05.839326 env[1593]: time="2024-04-12T18:23:05.839269342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:23:05.840896 env[1593]: time="2024-04-12T18:23:05.840765270Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fee94ee0b7e35142aa29ec080fd23b93a96837ed9efe579351691fd2cb49440b pid=3947 runtime=io.containerd.runc.v2
Apr 12 18:23:05.912267 systemd[1]: Started cri-containerd-fee94ee0b7e35142aa29ec080fd23b93a96837ed9efe579351691fd2cb49440b.scope.
Apr 12 18:23:05.998650 env[1593]: time="2024-04-12T18:23:05.998481825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:23:05.998860 env[1593]: time="2024-04-12T18:23:05.998684109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:23:05.998860 env[1593]: time="2024-04-12T18:23:05.998792444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:23:05.999398 env[1593]: time="2024-04-12T18:23:05.999254179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ae07183fe6950807354cd3372ce6238b11174da30e057eadb7eb1c0e1cc1262 pid=3980 runtime=io.containerd.runc.v2
Apr 12 18:23:06.043214 systemd[1]: Started cri-containerd-8ae07183fe6950807354cd3372ce6238b11174da30e057eadb7eb1c0e1cc1262.scope.
Apr 12 18:23:06.116080 env[1593]: time="2024-04-12T18:23:06.115855722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-x4sdx,Uid:f7374bce-7fe4-4135-9057-0589b1c335c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fee94ee0b7e35142aa29ec080fd23b93a96837ed9efe579351691fd2cb49440b\""
Apr 12 18:23:06.130585 env[1593]: time="2024-04-12T18:23:06.130516042Z" level=info msg="CreateContainer within sandbox \"fee94ee0b7e35142aa29ec080fd23b93a96837ed9efe579351691fd2cb49440b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 12 18:23:06.168417 env[1593]: time="2024-04-12T18:23:06.168304493Z" level=info msg="CreateContainer within sandbox \"fee94ee0b7e35142aa29ec080fd23b93a96837ed9efe579351691fd2cb49440b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09c53e24e05be7d6b2494cb37c24083e933fb72b1e327825562e5c8108cb7bce\""
Apr 12 18:23:06.173321 env[1593]: time="2024-04-12T18:23:06.173220742Z" level=info msg="StartContainer for \"09c53e24e05be7d6b2494cb37c24083e933fb72b1e327825562e5c8108cb7bce\""
Apr 12 18:23:06.240334 systemd[1]: Started cri-containerd-09c53e24e05be7d6b2494cb37c24083e933fb72b1e327825562e5c8108cb7bce.scope.
Apr 12 18:23:06.273554 env[1593]: time="2024-04-12T18:23:06.273468373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8qhj6,Uid:8bddbc8f-8806-46cf-ac17-6ce9152b7449,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ae07183fe6950807354cd3372ce6238b11174da30e057eadb7eb1c0e1cc1262\""
Apr 12 18:23:06.280768 env[1593]: time="2024-04-12T18:23:06.280701627Z" level=info msg="CreateContainer within sandbox \"8ae07183fe6950807354cd3372ce6238b11174da30e057eadb7eb1c0e1cc1262\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 12 18:23:06.315321 env[1593]: time="2024-04-12T18:23:06.315233235Z" level=info msg="CreateContainer within sandbox \"8ae07183fe6950807354cd3372ce6238b11174da30e057eadb7eb1c0e1cc1262\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da10e4b5e686ee0e0a31b7ab64f5602030f28cab1f00d18a30e811a7153ac0ee\""
Apr 12 18:23:06.316853 env[1593]: time="2024-04-12T18:23:06.316776816Z" level=info msg="StartContainer for \"da10e4b5e686ee0e0a31b7ab64f5602030f28cab1f00d18a30e811a7153ac0ee\""
Apr 12 18:23:06.370376 env[1593]: time="2024-04-12T18:23:06.370180965Z" level=info msg="StartContainer for \"09c53e24e05be7d6b2494cb37c24083e933fb72b1e327825562e5c8108cb7bce\" returns successfully"
Apr 12 18:23:06.386772 systemd[1]: Started cri-containerd-da10e4b5e686ee0e0a31b7ab64f5602030f28cab1f00d18a30e811a7153ac0ee.scope.
Apr 12 18:23:06.509886 env[1593]: time="2024-04-12T18:23:06.509782134Z" level=info msg="StartContainer for \"da10e4b5e686ee0e0a31b7ab64f5602030f28cab1f00d18a30e811a7153ac0ee\" returns successfully"
Apr 12 18:23:06.635785 kubelet[2764]: I0412 18:23:06.635542 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8qhj6" podStartSLOduration=39.635440197 podStartE2EDuration="39.635440197s" podCreationTimestamp="2024-04-12 18:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:23:06.633209005 +0000 UTC m=+52.800454604" watchObservedRunningTime="2024-04-12 18:23:06.635440197 +0000 UTC m=+52.802685796"
Apr 12 18:23:06.854794 systemd[1]: run-containerd-runc-k8s.io-8ae07183fe6950807354cd3372ce6238b11174da30e057eadb7eb1c0e1cc1262-runc.Fp29Vi.mount: Deactivated successfully.
Apr 12 18:23:07.611279 kubelet[2764]: I0412 18:23:07.611201 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-x4sdx" podStartSLOduration=40.611130213 podStartE2EDuration="40.611130213s" podCreationTimestamp="2024-04-12 18:22:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:23:06.67640796 +0000 UTC m=+52.843653559" watchObservedRunningTime="2024-04-12 18:23:07.611130213 +0000 UTC m=+53.778375812"
Apr 12 18:23:08.195601 systemd[1]: Started sshd@6-172.31.18.247:22-139.178.89.65:35552.service.
Apr 12 18:23:08.373743 sshd[4106]: Accepted publickey for core from 139.178.89.65 port 35552 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:08.376829 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:08.385788 systemd-logind[1581]: New session 7 of user core.
Apr 12 18:23:08.387697 systemd[1]: Started session-7.scope.
Apr 12 18:23:08.674211 sshd[4106]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:08.680570 systemd-logind[1581]: Session 7 logged out. Waiting for processes to exit.
Apr 12 18:23:08.681394 systemd[1]: sshd@6-172.31.18.247:22-139.178.89.65:35552.service: Deactivated successfully.
Apr 12 18:23:08.682943 systemd[1]: session-7.scope: Deactivated successfully.
Apr 12 18:23:08.687481 systemd-logind[1581]: Removed session 7.
Apr 12 18:23:13.705556 systemd[1]: Started sshd@7-172.31.18.247:22-139.178.89.65:35554.service.
Apr 12 18:23:13.878148 sshd[4120]: Accepted publickey for core from 139.178.89.65 port 35554 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:13.881473 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:13.895130 systemd[1]: Started session-8.scope.
Apr 12 18:23:13.896378 systemd-logind[1581]: New session 8 of user core.
Apr 12 18:23:14.174309 sshd[4120]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:14.180554 systemd[1]: sshd@7-172.31.18.247:22-139.178.89.65:35554.service: Deactivated successfully.
Apr 12 18:23:14.182752 systemd[1]: session-8.scope: Deactivated successfully.
Apr 12 18:23:14.185631 systemd-logind[1581]: Session 8 logged out. Waiting for processes to exit.
Apr 12 18:23:14.187643 systemd-logind[1581]: Removed session 8.
Apr 12 18:23:19.206234 systemd[1]: Started sshd@8-172.31.18.247:22-139.178.89.65:34910.service.
Apr 12 18:23:19.375325 sshd[4135]: Accepted publickey for core from 139.178.89.65 port 34910 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:19.378946 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:19.388749 systemd-logind[1581]: New session 9 of user core.
Apr 12 18:23:19.390734 systemd[1]: Started session-9.scope.
Apr 12 18:23:19.699205 sshd[4135]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:19.706557 systemd-logind[1581]: Session 9 logged out. Waiting for processes to exit.
Apr 12 18:23:19.706776 systemd[1]: sshd@8-172.31.18.247:22-139.178.89.65:34910.service: Deactivated successfully.
Apr 12 18:23:19.708381 systemd[1]: session-9.scope: Deactivated successfully.
Apr 12 18:23:19.711304 systemd-logind[1581]: Removed session 9.
Apr 12 18:23:24.734325 systemd[1]: Started sshd@9-172.31.18.247:22-139.178.89.65:34922.service.
Apr 12 18:23:24.913812 sshd[4148]: Accepted publickey for core from 139.178.89.65 port 34922 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:24.916799 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:24.927802 systemd-logind[1581]: New session 10 of user core.
Apr 12 18:23:24.929193 systemd[1]: Started session-10.scope.
Apr 12 18:23:25.218406 sshd[4148]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:25.224929 systemd-logind[1581]: Session 10 logged out. Waiting for processes to exit.
Apr 12 18:23:25.225585 systemd[1]: sshd@9-172.31.18.247:22-139.178.89.65:34922.service: Deactivated successfully.
Apr 12 18:23:25.227802 systemd[1]: session-10.scope: Deactivated successfully.
Apr 12 18:23:25.230832 systemd-logind[1581]: Removed session 10.
Apr 12 18:23:25.251232 systemd[1]: Started sshd@10-172.31.18.247:22-139.178.89.65:34926.service.
Apr 12 18:23:25.425586 sshd[4161]: Accepted publickey for core from 139.178.89.65 port 34926 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:25.429642 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:25.440173 systemd-logind[1581]: New session 11 of user core.
Apr 12 18:23:25.440733 systemd[1]: Started session-11.scope.
Apr 12 18:23:25.801205 sshd[4161]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:25.809930 systemd[1]: sshd@10-172.31.18.247:22-139.178.89.65:34926.service: Deactivated successfully.
Apr 12 18:23:25.811492 systemd-logind[1581]: Session 11 logged out. Waiting for processes to exit.
Apr 12 18:23:25.812363 systemd[1]: session-11.scope: Deactivated successfully.
Apr 12 18:23:25.816353 systemd-logind[1581]: Removed session 11.
Apr 12 18:23:25.831987 systemd[1]: Started sshd@11-172.31.18.247:22-139.178.89.65:34930.service.
Apr 12 18:23:26.012573 sshd[4171]: Accepted publickey for core from 139.178.89.65 port 34930 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:26.015616 sshd[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:26.024841 systemd-logind[1581]: New session 12 of user core.
Apr 12 18:23:26.027398 systemd[1]: Started session-12.scope.
Apr 12 18:23:26.369544 sshd[4171]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:26.376542 systemd[1]: sshd@11-172.31.18.247:22-139.178.89.65:34930.service: Deactivated successfully.
Apr 12 18:23:26.378232 systemd[1]: session-12.scope: Deactivated successfully.
Apr 12 18:23:26.379839 systemd-logind[1581]: Session 12 logged out. Waiting for processes to exit.
Apr 12 18:23:26.383579 systemd-logind[1581]: Removed session 12.
Apr 12 18:23:31.403908 systemd[1]: Started sshd@12-172.31.18.247:22-139.178.89.65:41264.service.
Apr 12 18:23:31.579584 sshd[4187]: Accepted publickey for core from 139.178.89.65 port 41264 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:31.583353 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:31.592808 systemd-logind[1581]: New session 13 of user core.
Apr 12 18:23:31.594712 systemd[1]: Started session-13.scope.
Apr 12 18:23:31.876331 sshd[4187]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:31.881737 systemd-logind[1581]: Session 13 logged out. Waiting for processes to exit.
Apr 12 18:23:31.881892 systemd[1]: session-13.scope: Deactivated successfully.
Apr 12 18:23:31.884212 systemd[1]: sshd@12-172.31.18.247:22-139.178.89.65:41264.service: Deactivated successfully.
Apr 12 18:23:31.886891 systemd-logind[1581]: Removed session 13.
Apr 12 18:23:36.909413 systemd[1]: Started sshd@13-172.31.18.247:22-139.178.89.65:41266.service.
Apr 12 18:23:37.085894 sshd[4200]: Accepted publickey for core from 139.178.89.65 port 41266 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:37.090243 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:37.100510 systemd-logind[1581]: New session 14 of user core.
Apr 12 18:23:37.101601 systemd[1]: Started session-14.scope.
Apr 12 18:23:37.376754 sshd[4200]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:37.383357 systemd-logind[1581]: Session 14 logged out. Waiting for processes to exit.
Apr 12 18:23:37.384750 systemd[1]: sshd@13-172.31.18.247:22-139.178.89.65:41266.service: Deactivated successfully.
Apr 12 18:23:37.386525 systemd[1]: session-14.scope: Deactivated successfully.
Apr 12 18:23:37.389017 systemd-logind[1581]: Removed session 14.
Apr 12 18:23:42.408874 systemd[1]: Started sshd@14-172.31.18.247:22-139.178.89.65:57204.service.
Apr 12 18:23:42.588473 sshd[4212]: Accepted publickey for core from 139.178.89.65 port 57204 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:42.592223 sshd[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:42.603305 systemd-logind[1581]: New session 15 of user core.
Apr 12 18:23:42.603852 systemd[1]: Started session-15.scope.
Apr 12 18:23:42.890480 sshd[4212]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:42.897578 systemd[1]: sshd@14-172.31.18.247:22-139.178.89.65:57204.service: Deactivated successfully.
Apr 12 18:23:42.897812 systemd-logind[1581]: Session 15 logged out. Waiting for processes to exit.
Apr 12 18:23:42.899171 systemd[1]: session-15.scope: Deactivated successfully.
Apr 12 18:23:42.902613 systemd-logind[1581]: Removed session 15.
Apr 12 18:23:42.919830 systemd[1]: Started sshd@15-172.31.18.247:22-139.178.89.65:57220.service.
Apr 12 18:23:43.091009 sshd[4224]: Accepted publickey for core from 139.178.89.65 port 57220 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:43.096146 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:43.106333 systemd-logind[1581]: New session 16 of user core.
Apr 12 18:23:43.107776 systemd[1]: Started session-16.scope.
Apr 12 18:23:43.448490 sshd[4224]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:43.454722 systemd[1]: session-16.scope: Deactivated successfully.
Apr 12 18:23:43.456773 systemd-logind[1581]: Session 16 logged out. Waiting for processes to exit.
Apr 12 18:23:43.457215 systemd[1]: sshd@15-172.31.18.247:22-139.178.89.65:57220.service: Deactivated successfully.
Apr 12 18:23:43.461231 systemd-logind[1581]: Removed session 16.
Apr 12 18:23:43.479517 systemd[1]: Started sshd@16-172.31.18.247:22-139.178.89.65:57226.service.
Apr 12 18:23:43.652947 sshd[4234]: Accepted publickey for core from 139.178.89.65 port 57226 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:43.656755 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:43.669638 systemd[1]: Started session-17.scope.
Apr 12 18:23:43.670901 systemd-logind[1581]: New session 17 of user core.
Apr 12 18:23:46.477310 sshd[4234]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:46.486293 systemd-logind[1581]: Session 17 logged out. Waiting for processes to exit.
Apr 12 18:23:46.486721 systemd[1]: sshd@16-172.31.18.247:22-139.178.89.65:57226.service: Deactivated successfully.
Apr 12 18:23:46.488721 systemd[1]: session-17.scope: Deactivated successfully.
Apr 12 18:23:46.489421 systemd[1]: session-17.scope: Consumed 1.050s CPU time.
Apr 12 18:23:46.493064 systemd-logind[1581]: Removed session 17.
Apr 12 18:23:46.510942 systemd[1]: Started sshd@17-172.31.18.247:22-139.178.89.65:57240.service.
Apr 12 18:23:46.687936 sshd[4251]: Accepted publickey for core from 139.178.89.65 port 57240 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:46.691975 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:46.701899 systemd-logind[1581]: New session 18 of user core.
Apr 12 18:23:46.704906 systemd[1]: Started session-18.scope.
Apr 12 18:23:47.245861 sshd[4251]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:47.253193 systemd-logind[1581]: Session 18 logged out. Waiting for processes to exit.
Apr 12 18:23:47.253448 systemd[1]: sshd@17-172.31.18.247:22-139.178.89.65:57240.service: Deactivated successfully.
Apr 12 18:23:47.254952 systemd[1]: session-18.scope: Deactivated successfully.
Apr 12 18:23:47.258321 systemd-logind[1581]: Removed session 18.
Apr 12 18:23:47.277785 systemd[1]: Started sshd@18-172.31.18.247:22-139.178.89.65:57932.service.
Apr 12 18:23:47.451791 sshd[4261]: Accepted publickey for core from 139.178.89.65 port 57932 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:47.458077 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:47.467383 systemd-logind[1581]: New session 19 of user core.
Apr 12 18:23:47.469543 systemd[1]: Started session-19.scope.
Apr 12 18:23:47.745161 sshd[4261]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:47.750863 systemd[1]: sshd@18-172.31.18.247:22-139.178.89.65:57932.service: Deactivated successfully.
Apr 12 18:23:47.752707 systemd[1]: session-19.scope: Deactivated successfully.
Apr 12 18:23:47.754506 systemd-logind[1581]: Session 19 logged out. Waiting for processes to exit.
Apr 12 18:23:47.757789 systemd-logind[1581]: Removed session 19.
Apr 12 18:23:52.775778 systemd[1]: Started sshd@19-172.31.18.247:22-139.178.89.65:57938.service.
Apr 12 18:23:52.950413 sshd[4273]: Accepted publickey for core from 139.178.89.65 port 57938 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:52.954167 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:52.963181 systemd-logind[1581]: New session 20 of user core.
Apr 12 18:23:52.965503 systemd[1]: Started session-20.scope.
Apr 12 18:23:53.247502 sshd[4273]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:53.253596 systemd-logind[1581]: Session 20 logged out. Waiting for processes to exit.
Apr 12 18:23:53.255272 systemd[1]: sshd@19-172.31.18.247:22-139.178.89.65:57938.service: Deactivated successfully.
Apr 12 18:23:53.256808 systemd[1]: session-20.scope: Deactivated successfully.
Apr 12 18:23:53.260264 systemd-logind[1581]: Removed session 20.
Apr 12 18:23:58.281838 systemd[1]: Started sshd@20-172.31.18.247:22-139.178.89.65:45536.service.
Apr 12 18:23:58.458763 sshd[4288]: Accepted publickey for core from 139.178.89.65 port 45536 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:23:58.462824 sshd[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:23:58.472084 systemd-logind[1581]: New session 21 of user core.
Apr 12 18:23:58.473394 systemd[1]: Started session-21.scope.
Apr 12 18:23:58.742913 sshd[4288]: pam_unix(sshd:session): session closed for user core
Apr 12 18:23:58.751385 systemd[1]: session-21.scope: Deactivated successfully.
Apr 12 18:23:58.752779 systemd[1]: sshd@20-172.31.18.247:22-139.178.89.65:45536.service: Deactivated successfully.
Apr 12 18:23:58.755158 systemd-logind[1581]: Session 21 logged out. Waiting for processes to exit.
Apr 12 18:23:58.757793 systemd-logind[1581]: Removed session 21.
Apr 12 18:24:03.775013 systemd[1]: Started sshd@21-172.31.18.247:22-139.178.89.65:45550.service.
Apr 12 18:24:03.950794 sshd[4302]: Accepted publickey for core from 139.178.89.65 port 45550 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:24:03.953843 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:03.963575 systemd-logind[1581]: New session 22 of user core.
Apr 12 18:24:03.964728 systemd[1]: Started session-22.scope.
Apr 12 18:24:04.235582 sshd[4302]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:04.243973 systemd-logind[1581]: Session 22 logged out. Waiting for processes to exit.
Apr 12 18:24:04.244665 systemd[1]: sshd@21-172.31.18.247:22-139.178.89.65:45550.service: Deactivated successfully.
Apr 12 18:24:04.246572 systemd[1]: session-22.scope: Deactivated successfully.
Apr 12 18:24:04.248719 systemd-logind[1581]: Removed session 22.
Apr 12 18:24:09.266926 systemd[1]: Started sshd@22-172.31.18.247:22-139.178.89.65:39274.service.
Apr 12 18:24:09.441531 sshd[4314]: Accepted publickey for core from 139.178.89.65 port 39274 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:24:09.444581 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:09.454292 systemd-logind[1581]: New session 23 of user core.
Apr 12 18:24:09.456149 systemd[1]: Started session-23.scope.
Apr 12 18:24:09.726846 sshd[4314]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:09.732504 systemd[1]: session-23.scope: Deactivated successfully.
Apr 12 18:24:09.733825 systemd[1]: sshd@22-172.31.18.247:22-139.178.89.65:39274.service: Deactivated successfully.
Apr 12 18:24:09.736111 systemd-logind[1581]: Session 23 logged out. Waiting for processes to exit.
Apr 12 18:24:09.738824 systemd-logind[1581]: Removed session 23.
Apr 12 18:24:09.759441 systemd[1]: Started sshd@23-172.31.18.247:22-139.178.89.65:39282.service.
Apr 12 18:24:09.930077 sshd[4326]: Accepted publickey for core from 139.178.89.65 port 39282 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:24:09.933346 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:09.944759 systemd[1]: Started session-24.scope.
Apr 12 18:24:09.946450 systemd-logind[1581]: New session 24 of user core.
Apr 12 18:24:13.773640 systemd[1]: run-containerd-runc-k8s.io-a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943-runc.Dw6rRY.mount: Deactivated successfully.
Apr 12 18:24:13.782562 env[1593]: time="2024-04-12T18:24:13.782461627Z" level=info msg="StopContainer for \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\" with timeout 30 (s)" Apr 12 18:24:13.784068 env[1593]: time="2024-04-12T18:24:13.783971299Z" level=info msg="Stop container \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\" with signal terminated" Apr 12 18:24:13.883093 env[1593]: time="2024-04-12T18:24:13.882395502Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:24:13.931162 env[1593]: time="2024-04-12T18:24:13.930972536Z" level=info msg="StopContainer for \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\" with timeout 2 (s)" Apr 12 18:24:13.931849 env[1593]: time="2024-04-12T18:24:13.931784044Z" level=info msg="Stop container \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\" with signal terminated" Apr 12 18:24:13.933827 systemd[1]: cri-containerd-4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457.scope: Deactivated successfully. Apr 12 18:24:13.965622 systemd-networkd[1403]: lxc_health: Link DOWN Apr 12 18:24:13.965639 systemd-networkd[1403]: lxc_health: Lost carrier Apr 12 18:24:14.012023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457-rootfs.mount: Deactivated successfully. Apr 12 18:24:14.015106 systemd[1]: cri-containerd-a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943.scope: Deactivated successfully. Apr 12 18:24:14.015778 systemd[1]: cri-containerd-a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943.scope: Consumed 18.138s CPU time. 
Apr 12 18:24:14.034809 env[1593]: time="2024-04-12T18:24:14.034722160Z" level=info msg="shim disconnected" id=4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457 Apr 12 18:24:14.034809 env[1593]: time="2024-04-12T18:24:14.034801650Z" level=warning msg="cleaning up after shim disconnected" id=4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457 namespace=k8s.io Apr 12 18:24:14.035304 env[1593]: time="2024-04-12T18:24:14.034825230Z" level=info msg="cleaning up dead shim" Apr 12 18:24:14.065942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943-rootfs.mount: Deactivated successfully. Apr 12 18:24:14.069585 env[1593]: time="2024-04-12T18:24:14.069505475Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4385 runtime=io.containerd.runc.v2\n" Apr 12 18:24:14.074964 env[1593]: time="2024-04-12T18:24:14.074882242Z" level=info msg="StopContainer for \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\" returns successfully" Apr 12 18:24:14.076687 env[1593]: time="2024-04-12T18:24:14.076513562Z" level=info msg="StopPodSandbox for \"ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d\"" Apr 12 18:24:14.076687 env[1593]: time="2024-04-12T18:24:14.076647245Z" level=info msg="Container to stop \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:24:14.091004 env[1593]: time="2024-04-12T18:24:14.090909328Z" level=info msg="shim disconnected" id=a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943 Apr 12 18:24:14.091247 env[1593]: time="2024-04-12T18:24:14.091001874Z" level=warning msg="cleaning up after shim disconnected" id=a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943 namespace=k8s.io Apr 12 18:24:14.091247 env[1593]: 
time="2024-04-12T18:24:14.091056116Z" level=info msg="cleaning up dead shim" Apr 12 18:24:14.093305 systemd[1]: cri-containerd-ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d.scope: Deactivated successfully. Apr 12 18:24:14.114780 env[1593]: time="2024-04-12T18:24:14.114713012Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4412 runtime=io.containerd.runc.v2\n" Apr 12 18:24:14.118754 env[1593]: time="2024-04-12T18:24:14.118681965Z" level=info msg="StopContainer for \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\" returns successfully" Apr 12 18:24:14.119979 env[1593]: time="2024-04-12T18:24:14.119913231Z" level=info msg="StopPodSandbox for \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\"" Apr 12 18:24:14.120475 env[1593]: time="2024-04-12T18:24:14.120326425Z" level=info msg="Container to stop \"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:24:14.120808 env[1593]: time="2024-04-12T18:24:14.120732515Z" level=info msg="Container to stop \"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:24:14.121428 env[1593]: time="2024-04-12T18:24:14.121323697Z" level=info msg="Container to stop \"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:24:14.121428 env[1593]: time="2024-04-12T18:24:14.121394787Z" level=info msg="Container to stop \"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:24:14.121857 env[1593]: time="2024-04-12T18:24:14.121448776Z" level=info msg="Container to stop 
\"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:24:14.138131 systemd[1]: cri-containerd-1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b.scope: Deactivated successfully. Apr 12 18:24:14.173801 env[1593]: time="2024-04-12T18:24:14.173720813Z" level=info msg="shim disconnected" id=ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d Apr 12 18:24:14.174325 env[1593]: time="2024-04-12T18:24:14.174230130Z" level=warning msg="cleaning up after shim disconnected" id=ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d namespace=k8s.io Apr 12 18:24:14.174580 env[1593]: time="2024-04-12T18:24:14.174530161Z" level=info msg="cleaning up dead shim" Apr 12 18:24:14.207079 env[1593]: time="2024-04-12T18:24:14.206952299Z" level=info msg="shim disconnected" id=1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b Apr 12 18:24:14.207079 env[1593]: time="2024-04-12T18:24:14.207057445Z" level=warning msg="cleaning up after shim disconnected" id=1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b namespace=k8s.io Apr 12 18:24:14.207468 env[1593]: time="2024-04-12T18:24:14.207087050Z" level=info msg="cleaning up dead shim" Apr 12 18:24:14.214908 env[1593]: time="2024-04-12T18:24:14.214835123Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4451 runtime=io.containerd.runc.v2\n" Apr 12 18:24:14.216171 env[1593]: time="2024-04-12T18:24:14.216004287Z" level=info msg="TearDown network for sandbox \"ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d\" successfully" Apr 12 18:24:14.218922 env[1593]: time="2024-04-12T18:24:14.217550309Z" level=info msg="StopPodSandbox for \"ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d\" returns successfully" Apr 12 18:24:14.255395 env[1593]: time="2024-04-12T18:24:14.255302040Z" 
level=warning msg="cleanup warnings time=\"2024-04-12T18:24:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4470 runtime=io.containerd.runc.v2\n" Apr 12 18:24:14.256000 env[1593]: time="2024-04-12T18:24:14.255911607Z" level=info msg="TearDown network for sandbox \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" successfully" Apr 12 18:24:14.256000 env[1593]: time="2024-04-12T18:24:14.255980261Z" level=info msg="StopPodSandbox for \"1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b\" returns successfully" Apr 12 18:24:14.355763 kubelet[2764]: I0412 18:24:14.355592 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-hostproc\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.355763 kubelet[2764]: I0412 18:24:14.355705 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-host-proc-sys-net\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.355763 kubelet[2764]: I0412 18:24:14.355768 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a609aad-a90d-41cc-81de-f04da6609c50-clustermesh-secrets\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.356596 kubelet[2764]: I0412 18:24:14.355848 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-lib-modules\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.356596 kubelet[2764]: 
I0412 18:24:14.355898 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-host-proc-sys-kernel\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.356596 kubelet[2764]: I0412 18:24:14.355947 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dce65d8d-6362-4b97-8575-6f19d7f56f90-cilium-config-path\") pod \"dce65d8d-6362-4b97-8575-6f19d7f56f90\" (UID: \"dce65d8d-6362-4b97-8575-6f19d7f56f90\") " Apr 12 18:24:14.356596 kubelet[2764]: I0412 18:24:14.355996 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-config-path\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.356596 kubelet[2764]: I0412 18:24:14.356122 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-cgroup\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.356596 kubelet[2764]: I0412 18:24:14.356171 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-bpf-maps\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.357193 kubelet[2764]: I0412 18:24:14.356211 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-etc-cni-netd\") pod 
\"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.357193 kubelet[2764]: I0412 18:24:14.356256 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cni-path\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.357193 kubelet[2764]: I0412 18:24:14.356299 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-xtables-lock\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.357193 kubelet[2764]: I0412 18:24:14.356352 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66vzj\" (UniqueName: \"kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-kube-api-access-66vzj\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.357193 kubelet[2764]: I0412 18:24:14.356399 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8nfd\" (UniqueName: \"kubernetes.io/projected/dce65d8d-6362-4b97-8575-6f19d7f56f90-kube-api-access-z8nfd\") pod \"dce65d8d-6362-4b97-8575-6f19d7f56f90\" (UID: \"dce65d8d-6362-4b97-8575-6f19d7f56f90\") " Apr 12 18:24:14.357193 kubelet[2764]: I0412 18:24:14.356446 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-hubble-tls\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.372727 kubelet[2764]: I0412 18:24:14.372666 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-run\") pod \"9a609aad-a90d-41cc-81de-f04da6609c50\" (UID: \"9a609aad-a90d-41cc-81de-f04da6609c50\") " Apr 12 18:24:14.374739 kubelet[2764]: I0412 18:24:14.372786 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:14.374739 kubelet[2764]: I0412 18:24:14.373059 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:24:14.374739 kubelet[2764]: I0412 18:24:14.373141 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:24:14.374739 kubelet[2764]: I0412 18:24:14.361772 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:14.374739 kubelet[2764]: I0412 18:24:14.361821 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:14.375203 kubelet[2764]: I0412 18:24:14.361879 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:14.375203 kubelet[2764]: I0412 18:24:14.361908 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cni-path" (OuterVolumeSpecName: "cni-path") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:14.375203 kubelet[2764]: I0412 18:24:14.361937 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:14.375203 kubelet[2764]: I0412 18:24:14.364770 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-hostproc" (OuterVolumeSpecName: "hostproc") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:14.375203 kubelet[2764]: I0412 18:24:14.364827 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:14.375537 kubelet[2764]: I0412 18:24:14.373555 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:14.375746 kubelet[2764]: I0412 18:24:14.372692 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:14.383140 kubelet[2764]: I0412 18:24:14.383078 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a609aad-a90d-41cc-81de-f04da6609c50-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:24:14.389391 kubelet[2764]: I0412 18:24:14.389319 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce65d8d-6362-4b97-8575-6f19d7f56f90-kube-api-access-z8nfd" (OuterVolumeSpecName: "kube-api-access-z8nfd") pod "dce65d8d-6362-4b97-8575-6f19d7f56f90" (UID: "dce65d8d-6362-4b97-8575-6f19d7f56f90"). InnerVolumeSpecName "kube-api-access-z8nfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:24:14.389715 kubelet[2764]: I0412 18:24:14.389486 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dce65d8d-6362-4b97-8575-6f19d7f56f90-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dce65d8d-6362-4b97-8575-6f19d7f56f90" (UID: "dce65d8d-6362-4b97-8575-6f19d7f56f90"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:24:14.391973 kubelet[2764]: I0412 18:24:14.391901 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-kube-api-access-66vzj" (OuterVolumeSpecName: "kube-api-access-66vzj") pod "9a609aad-a90d-41cc-81de-f04da6609c50" (UID: "9a609aad-a90d-41cc-81de-f04da6609c50"). InnerVolumeSpecName "kube-api-access-66vzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:24:14.473330 kubelet[2764]: I0412 18:24:14.473267 2764 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-host-proc-sys-net\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.473654 kubelet[2764]: I0412 18:24:14.473610 2764 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-hostproc\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.473837 kubelet[2764]: I0412 18:24:14.473814 2764 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-lib-modules\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.474004 kubelet[2764]: I0412 18:24:14.473980 2764 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a609aad-a90d-41cc-81de-f04da6609c50-clustermesh-secrets\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.474221 kubelet[2764]: I0412 18:24:14.474197 2764 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-host-proc-sys-kernel\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.474398 kubelet[2764]: I0412 18:24:14.474377 2764 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dce65d8d-6362-4b97-8575-6f19d7f56f90-cilium-config-path\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.474556 kubelet[2764]: I0412 18:24:14.474536 2764 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-config-path\") on node 
\"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.474709 kubelet[2764]: I0412 18:24:14.474689 2764 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-cgroup\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.474855 kubelet[2764]: I0412 18:24:14.474836 2764 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-bpf-maps\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.475014 kubelet[2764]: I0412 18:24:14.474992 2764 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-etc-cni-netd\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.475248 kubelet[2764]: I0412 18:24:14.475212 2764 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cni-path\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.475405 kubelet[2764]: I0412 18:24:14.475384 2764 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-xtables-lock\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.475554 kubelet[2764]: I0412 18:24:14.475533 2764 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-66vzj\" (UniqueName: \"kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-kube-api-access-66vzj\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.475760 kubelet[2764]: I0412 18:24:14.475703 2764 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z8nfd\" (UniqueName: \"kubernetes.io/projected/dce65d8d-6362-4b97-8575-6f19d7f56f90-kube-api-access-z8nfd\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 
18:24:14.475913 kubelet[2764]: I0412 18:24:14.475888 2764 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a609aad-a90d-41cc-81de-f04da6609c50-cilium-run\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.476139 kubelet[2764]: I0412 18:24:14.476106 2764 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a609aad-a90d-41cc-81de-f04da6609c50-hubble-tls\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:14.502278 kubelet[2764]: E0412 18:24:14.502219 2764 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:24:14.738771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b-rootfs.mount: Deactivated successfully. Apr 12 18:24:14.740741 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1304c78a7c1f0642686afa415b354ae89209dee4867fdc11f380c7aabe7e1b7b-shm.mount: Deactivated successfully. Apr 12 18:24:14.741320 systemd[1]: var-lib-kubelet-pods-9a609aad\x2da90d\x2d41cc\x2d81de\x2df04da6609c50-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d66vzj.mount: Deactivated successfully. Apr 12 18:24:14.741804 systemd[1]: var-lib-kubelet-pods-9a609aad\x2da90d\x2d41cc\x2d81de\x2df04da6609c50-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:24:14.742397 systemd[1]: var-lib-kubelet-pods-9a609aad\x2da90d\x2d41cc\x2d81de\x2df04da6609c50-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:24:14.742790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d-rootfs.mount: Deactivated successfully. 
Apr 12 18:24:14.743190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac0111f636c6e599a5b1d352c8e184c483e4ac640d8a01c8271b906da84eb44d-shm.mount: Deactivated successfully. Apr 12 18:24:14.743640 systemd[1]: var-lib-kubelet-pods-dce65d8d\x2d6362\x2d4b97\x2d8575\x2d6f19d7f56f90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz8nfd.mount: Deactivated successfully. Apr 12 18:24:14.806507 kubelet[2764]: I0412 18:24:14.806445 2764 scope.go:117] "RemoveContainer" containerID="4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457" Apr 12 18:24:14.817959 env[1593]: time="2024-04-12T18:24:14.816638981Z" level=info msg="RemoveContainer for \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\"" Apr 12 18:24:14.820019 systemd[1]: Removed slice kubepods-besteffort-poddce65d8d_6362_4b97_8575_6f19d7f56f90.slice. Apr 12 18:24:14.831187 env[1593]: time="2024-04-12T18:24:14.831107422Z" level=info msg="RemoveContainer for \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\" returns successfully" Apr 12 18:24:14.833502 kubelet[2764]: I0412 18:24:14.833456 2764 scope.go:117] "RemoveContainer" containerID="4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457" Apr 12 18:24:14.835020 env[1593]: time="2024-04-12T18:24:14.834581194Z" level=error msg="ContainerStatus for \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\": not found" Apr 12 18:24:14.836486 kubelet[2764]: E0412 18:24:14.836338 2764 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\": not found" containerID="4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457" Apr 12 18:24:14.836985 
kubelet[2764]: I0412 18:24:14.836945 2764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457"} err="failed to get container status \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\": rpc error: code = NotFound desc = an error occurred when try to find container \"4462e134b4c654575b45ec62400bb99f797936a3a1c9f0e195142f80ba866457\": not found" Apr 12 18:24:14.837463 kubelet[2764]: I0412 18:24:14.837392 2764 scope.go:117] "RemoveContainer" containerID="a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943" Apr 12 18:24:14.851621 env[1593]: time="2024-04-12T18:24:14.850886975Z" level=info msg="RemoveContainer for \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\"" Apr 12 18:24:14.857767 env[1593]: time="2024-04-12T18:24:14.857676897Z" level=info msg="RemoveContainer for \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\" returns successfully" Apr 12 18:24:14.858423 kubelet[2764]: I0412 18:24:14.858345 2764 scope.go:117] "RemoveContainer" containerID="60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9" Apr 12 18:24:14.859253 systemd[1]: Removed slice kubepods-burstable-pod9a609aad_a90d_41cc_81de_f04da6609c50.slice. Apr 12 18:24:14.859513 systemd[1]: kubepods-burstable-pod9a609aad_a90d_41cc_81de_f04da6609c50.slice: Consumed 18.399s CPU time. 
Apr 12 18:24:14.865518 env[1593]: time="2024-04-12T18:24:14.865358236Z" level=info msg="RemoveContainer for \"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9\""
Apr 12 18:24:14.890866 env[1593]: time="2024-04-12T18:24:14.890761223Z" level=info msg="RemoveContainer for \"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9\" returns successfully"
Apr 12 18:24:14.903814 kubelet[2764]: I0412 18:24:14.903748 2764 scope.go:117] "RemoveContainer" containerID="9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67"
Apr 12 18:24:14.910401 env[1593]: time="2024-04-12T18:24:14.910303543Z" level=info msg="RemoveContainer for \"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67\""
Apr 12 18:24:14.916087 env[1593]: time="2024-04-12T18:24:14.915940848Z" level=info msg="RemoveContainer for \"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67\" returns successfully"
Apr 12 18:24:14.916572 kubelet[2764]: I0412 18:24:14.916511 2764 scope.go:117] "RemoveContainer" containerID="c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161"
Apr 12 18:24:14.920092 env[1593]: time="2024-04-12T18:24:14.919952558Z" level=info msg="RemoveContainer for \"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161\""
Apr 12 18:24:14.926563 env[1593]: time="2024-04-12T18:24:14.926458972Z" level=info msg="RemoveContainer for \"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161\" returns successfully"
Apr 12 18:24:14.927225 kubelet[2764]: I0412 18:24:14.927150 2764 scope.go:117] "RemoveContainer" containerID="844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3"
Apr 12 18:24:14.931133 env[1593]: time="2024-04-12T18:24:14.930982626Z" level=info msg="RemoveContainer for \"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3\""
Apr 12 18:24:14.937644 env[1593]: time="2024-04-12T18:24:14.937540726Z" level=info msg="RemoveContainer for \"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3\" returns successfully"
Apr 12 18:24:14.938155 kubelet[2764]: I0412 18:24:14.938115 2764 scope.go:117] "RemoveContainer" containerID="a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943"
Apr 12 18:24:14.939361 env[1593]: time="2024-04-12T18:24:14.939217035Z" level=error msg="ContainerStatus for \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\": not found"
Apr 12 18:24:14.939702 kubelet[2764]: E0412 18:24:14.939660 2764 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\": not found" containerID="a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943"
Apr 12 18:24:14.939992 kubelet[2764]: I0412 18:24:14.939953 2764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943"} err="failed to get container status \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\": rpc error: code = NotFound desc = an error occurred when try to find container \"a720ae6deca8ee0b8dd10e29f4cc7a3402503c4f76aefdca9e64a083d2452943\": not found"
Apr 12 18:24:14.940296 kubelet[2764]: I0412 18:24:14.940263 2764 scope.go:117] "RemoveContainer" containerID="60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9"
Apr 12 18:24:14.940997 env[1593]: time="2024-04-12T18:24:14.940857931Z" level=error msg="ContainerStatus for \"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9\": not found"
Apr 12 18:24:14.941459 kubelet[2764]: E0412 18:24:14.941388 2764 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9\": not found" containerID="60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9"
Apr 12 18:24:14.941603 kubelet[2764]: I0412 18:24:14.941511 2764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9"} err="failed to get container status \"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"60e875b117d08e6b1b548cc49276a1fa635359b1fc990b449ce6cb8aa8d757e9\": not found"
Apr 12 18:24:14.941603 kubelet[2764]: I0412 18:24:14.941572 2764 scope.go:117] "RemoveContainer" containerID="9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67"
Apr 12 18:24:14.942301 env[1593]: time="2024-04-12T18:24:14.942143726Z" level=error msg="ContainerStatus for \"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67\": not found"
Apr 12 18:24:14.942833 kubelet[2764]: E0412 18:24:14.942797 2764 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67\": not found" containerID="9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67"
Apr 12 18:24:14.943299 kubelet[2764]: I0412 18:24:14.943264 2764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67"} err="failed to get container status \"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67\": rpc error: code = NotFound desc = an error occurred when try to find container \"9eae8059c44a30680855acb4f14d0536180ee9d8e5cffa1ef61373df6b017f67\": not found"
Apr 12 18:24:14.943497 kubelet[2764]: I0412 18:24:14.943469 2764 scope.go:117] "RemoveContainer" containerID="c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161"
Apr 12 18:24:14.944300 env[1593]: time="2024-04-12T18:24:14.944102918Z" level=error msg="ContainerStatus for \"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161\": not found"
Apr 12 18:24:14.944697 kubelet[2764]: E0412 18:24:14.944656 2764 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161\": not found" containerID="c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161"
Apr 12 18:24:14.944966 kubelet[2764]: I0412 18:24:14.944939 2764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161"} err="failed to get container status \"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8f5615ab01ead167386650836204990a8e2ee68dd65bb24ea70cf352fff4161\": not found"
Apr 12 18:24:14.945211 kubelet[2764]: I0412 18:24:14.945182 2764 scope.go:117] "RemoveContainer" containerID="844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3"
Apr 12 18:24:14.946199 env[1593]: time="2024-04-12T18:24:14.946067810Z" level=error msg="ContainerStatus for \"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3\": not found"
Apr 12 18:24:14.946669 kubelet[2764]: E0412 18:24:14.946625 2764 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3\": not found" containerID="844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3"
Apr 12 18:24:14.946915 kubelet[2764]: I0412 18:24:14.946887 2764 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3"} err="failed to get container status \"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"844df83285abf1dfbc83cefb6acc84aebf675be63f4d219838d821164a14f6a3\": not found"
Apr 12 18:24:15.615270 sshd[4326]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:15.621200 systemd[1]: session-24.scope: Deactivated successfully.
Apr 12 18:24:15.621608 systemd[1]: session-24.scope: Consumed 2.925s CPU time.
Apr 12 18:24:15.622869 systemd-logind[1581]: Session 24 logged out. Waiting for processes to exit.
Apr 12 18:24:15.623380 systemd[1]: sshd@23-172.31.18.247:22-139.178.89.65:39282.service: Deactivated successfully.
Apr 12 18:24:15.627180 systemd-logind[1581]: Removed session 24.
Apr 12 18:24:15.646005 systemd[1]: Started sshd@24-172.31.18.247:22-139.178.89.65:39284.service.
Apr 12 18:24:15.822455 sshd[4492]: Accepted publickey for core from 139.178.89.65 port 39284 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:24:15.826562 sshd[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:15.839662 systemd[1]: Started session-25.scope.
Apr 12 18:24:15.843247 systemd-logind[1581]: New session 25 of user core.
Apr 12 18:24:16.222113 kubelet[2764]: I0412 18:24:16.222056 2764 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9a609aad-a90d-41cc-81de-f04da6609c50" path="/var/lib/kubelet/pods/9a609aad-a90d-41cc-81de-f04da6609c50/volumes"
Apr 12 18:24:16.225326 kubelet[2764]: I0412 18:24:16.225278 2764 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dce65d8d-6362-4b97-8575-6f19d7f56f90" path="/var/lib/kubelet/pods/dce65d8d-6362-4b97-8575-6f19d7f56f90/volumes"
Apr 12 18:24:16.470135 kubelet[2764]: I0412 18:24:16.470075 2764 setters.go:568] "Node became not ready" node="ip-172-31-18-247" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-04-12T18:24:16Z","lastTransitionTime":"2024-04-12T18:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 12 18:24:18.718643 sshd[4492]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:18.726597 systemd[1]: sshd@24-172.31.18.247:22-139.178.89.65:39284.service: Deactivated successfully.
Apr 12 18:24:18.728555 systemd-logind[1581]: Session 25 logged out. Waiting for processes to exit.
Apr 12 18:24:18.730169 systemd[1]: session-25.scope: Deactivated successfully.
Apr 12 18:24:18.730600 systemd[1]: session-25.scope: Consumed 2.622s CPU time.
Apr 12 18:24:18.733022 systemd-logind[1581]: Removed session 25.
Apr 12 18:24:18.743974 kubelet[2764]: I0412 18:24:18.743883 2764 topology_manager.go:215] "Topology Admit Handler" podUID="f9110a75-f0b3-4268-be0a-49adb929292d" podNamespace="kube-system" podName="cilium-7hgdn"
Apr 12 18:24:18.744751 kubelet[2764]: E0412 18:24:18.744702 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a609aad-a90d-41cc-81de-f04da6609c50" containerName="cilium-agent"
Apr 12 18:24:18.745020 kubelet[2764]: E0412 18:24:18.744985 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dce65d8d-6362-4b97-8575-6f19d7f56f90" containerName="cilium-operator"
Apr 12 18:24:18.745292 kubelet[2764]: E0412 18:24:18.745254 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a609aad-a90d-41cc-81de-f04da6609c50" containerName="apply-sysctl-overwrites"
Apr 12 18:24:18.745520 kubelet[2764]: E0412 18:24:18.745463 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a609aad-a90d-41cc-81de-f04da6609c50" containerName="mount-cgroup"
Apr 12 18:24:18.745712 kubelet[2764]: E0412 18:24:18.745679 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a609aad-a90d-41cc-81de-f04da6609c50" containerName="mount-bpf-fs"
Apr 12 18:24:18.745953 kubelet[2764]: E0412 18:24:18.745877 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a609aad-a90d-41cc-81de-f04da6609c50" containerName="clean-cilium-state"
Apr 12 18:24:18.746323 kubelet[2764]: I0412 18:24:18.746274 2764 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce65d8d-6362-4b97-8575-6f19d7f56f90" containerName="cilium-operator"
Apr 12 18:24:18.746608 kubelet[2764]: I0412 18:24:18.746555 2764 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a609aad-a90d-41cc-81de-f04da6609c50" containerName="cilium-agent"
Apr 12 18:24:18.769416 systemd[1]: Started sshd@25-172.31.18.247:22-139.178.89.65:54960.service.
Apr 12 18:24:18.787549 systemd[1]: Created slice kubepods-burstable-podf9110a75_f0b3_4268_be0a_49adb929292d.slice.
Apr 12 18:24:18.916666 kubelet[2764]: I0412 18:24:18.916592 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-host-proc-sys-kernel\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917158 kubelet[2764]: I0412 18:24:18.917097 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-xtables-lock\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917311 kubelet[2764]: I0412 18:24:18.917185 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwhmg\" (UniqueName: \"kubernetes.io/projected/f9110a75-f0b3-4268-be0a-49adb929292d-kube-api-access-kwhmg\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917311 kubelet[2764]: I0412 18:24:18.917237 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-bpf-maps\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917311 kubelet[2764]: I0412 18:24:18.917307 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-lib-modules\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917545 kubelet[2764]: I0412 18:24:18.917362 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9110a75-f0b3-4268-be0a-49adb929292d-clustermesh-secrets\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917545 kubelet[2764]: I0412 18:24:18.917418 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-cgroup\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917545 kubelet[2764]: I0412 18:24:18.917470 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-run\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917545 kubelet[2764]: I0412 18:24:18.917544 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-etc-cni-netd\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917909 kubelet[2764]: I0412 18:24:18.917596 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-ipsec-secrets\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917909 kubelet[2764]: I0412 18:24:18.917667 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cni-path\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917909 kubelet[2764]: I0412 18:24:18.917716 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9110a75-f0b3-4268-be0a-49adb929292d-hubble-tls\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917909 kubelet[2764]: I0412 18:24:18.917767 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-hostproc\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917909 kubelet[2764]: I0412 18:24:18.917812 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-config-path\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.917909 kubelet[2764]: I0412 18:24:18.917858 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-host-proc-sys-net\") pod \"cilium-7hgdn\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") " pod="kube-system/cilium-7hgdn"
Apr 12 18:24:18.962429 sshd[4502]: Accepted publickey for core from 139.178.89.65 port 54960 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:24:18.965308 sshd[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:18.976288 systemd-logind[1581]: New session 26 of user core.
Apr 12 18:24:18.976625 systemd[1]: Started session-26.scope.
Apr 12 18:24:19.096081 env[1593]: time="2024-04-12T18:24:19.095955152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hgdn,Uid:f9110a75-f0b3-4268-be0a-49adb929292d,Namespace:kube-system,Attempt:0,}"
Apr 12 18:24:19.136870 env[1593]: time="2024-04-12T18:24:19.129924504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:24:19.136870 env[1593]: time="2024-04-12T18:24:19.130014014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:24:19.136870 env[1593]: time="2024-04-12T18:24:19.130268156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:24:19.136870 env[1593]: time="2024-04-12T18:24:19.130846415Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118 pid=4523 runtime=io.containerd.runc.v2
Apr 12 18:24:19.170113 systemd[1]: Started cri-containerd-6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118.scope.
Apr 12 18:24:19.252728 env[1593]: time="2024-04-12T18:24:19.252548639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hgdn,Uid:f9110a75-f0b3-4268-be0a-49adb929292d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118\""
Apr 12 18:24:19.267821 env[1593]: time="2024-04-12T18:24:19.267732220Z" level=info msg="CreateContainer within sandbox \"6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 12 18:24:19.299098 env[1593]: time="2024-04-12T18:24:19.298970281Z" level=info msg="CreateContainer within sandbox \"6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a\""
Apr 12 18:24:19.301437 env[1593]: time="2024-04-12T18:24:19.301373832Z" level=info msg="StartContainer for \"4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a\""
Apr 12 18:24:19.348691 systemd[1]: Started cri-containerd-4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a.scope.
Apr 12 18:24:19.393767 sshd[4502]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:19.401964 systemd[1]: sshd@25-172.31.18.247:22-139.178.89.65:54960.service: Deactivated successfully.
Apr 12 18:24:19.404765 systemd[1]: session-26.scope: Deactivated successfully.
Apr 12 18:24:19.408586 systemd-logind[1581]: Session 26 logged out. Waiting for processes to exit.
Apr 12 18:24:19.413533 systemd-logind[1581]: Removed session 26.
Apr 12 18:24:19.436443 systemd[1]: Started sshd@26-172.31.18.247:22-139.178.89.65:54970.service.
Apr 12 18:24:19.457281 systemd[1]: cri-containerd-4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a.scope: Deactivated successfully.
Apr 12 18:24:19.492596 env[1593]: time="2024-04-12T18:24:19.492371083Z" level=info msg="shim disconnected" id=4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a
Apr 12 18:24:19.493284 env[1593]: time="2024-04-12T18:24:19.493229140Z" level=warning msg="cleaning up after shim disconnected" id=4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a namespace=k8s.io
Apr 12 18:24:19.493592 env[1593]: time="2024-04-12T18:24:19.493526808Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:19.505096 kubelet[2764]: E0412 18:24:19.504098 2764 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:24:19.526144 env[1593]: time="2024-04-12T18:24:19.525952261Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4590 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:24:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Apr 12 18:24:19.528759 env[1593]: time="2024-04-12T18:24:19.528673317Z" level=error msg="Failed to pipe stdout of container \"4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a\"" error="reading from a closed fifo"
Apr 12 18:24:19.529218 env[1593]: time="2024-04-12T18:24:19.528976181Z" level=error msg="Failed to pipe stderr of container \"4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a\"" error="reading from a closed fifo"
Apr 12 18:24:19.529786 env[1593]: time="2024-04-12T18:24:19.527592906Z" level=error msg="copy shim log" error="read /proc/self/fd/46: file already closed"
Apr 12 18:24:19.535890 env[1593]: time="2024-04-12T18:24:19.535765733Z" level=error msg="StartContainer for \"4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Apr 12 18:24:19.536817 kubelet[2764]: E0412 18:24:19.536718 2764 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a"
Apr 12 18:24:19.537790 kubelet[2764]: E0412 18:24:19.537483 2764 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Apr 12 18:24:19.537790 kubelet[2764]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Apr 12 18:24:19.537790 kubelet[2764]: rm /hostbin/cilium-mount
Apr 12 18:24:19.538376 kubelet[2764]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-kwhmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-7hgdn_kube-system(f9110a75-f0b3-4268-be0a-49adb929292d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Apr 12 18:24:19.538376 kubelet[2764]: E0412 18:24:19.537609 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7hgdn" podUID="f9110a75-f0b3-4268-be0a-49adb929292d"
Apr 12 18:24:19.628822 sshd[4586]: Accepted publickey for core from 139.178.89.65 port 54970 ssh2: RSA SHA256:RkreMDY4vm0NHGuZyACqO02N2uRrqppl3JmzNQboRtE
Apr 12 18:24:19.631880 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Apr 12 18:24:19.644123 systemd-logind[1581]: New session 27 of user core.
Apr 12 18:24:19.645429 systemd[1]: Started session-27.scope.
Apr 12 18:24:19.860320 env[1593]: time="2024-04-12T18:24:19.860248901Z" level=info msg="StopPodSandbox for \"6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118\""
Apr 12 18:24:19.860578 env[1593]: time="2024-04-12T18:24:19.860363204Z" level=info msg="Container to stop \"4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 12 18:24:19.881549 systemd[1]: cri-containerd-6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118.scope: Deactivated successfully.
Apr 12 18:24:19.977069 env[1593]: time="2024-04-12T18:24:19.976969814Z" level=info msg="shim disconnected" id=6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118
Apr 12 18:24:19.977426 env[1593]: time="2024-04-12T18:24:19.977076232Z" level=warning msg="cleaning up after shim disconnected" id=6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118 namespace=k8s.io
Apr 12 18:24:19.977426 env[1593]: time="2024-04-12T18:24:19.977106893Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:19.997447 env[1593]: time="2024-04-12T18:24:19.997341580Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4626 runtime=io.containerd.runc.v2\n"
Apr 12 18:24:19.998166 env[1593]: time="2024-04-12T18:24:19.998087878Z" level=info msg="TearDown network for sandbox \"6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118\" successfully"
Apr 12 18:24:19.998166 env[1593]: time="2024-04-12T18:24:19.998160228Z" level=info msg="StopPodSandbox for \"6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118\" returns successfully"
Apr 12 18:24:20.034003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6855606194ab161d5ff835cabc97a40e745a0801a4bc9b010a1bffa88475d118-shm.mount: Deactivated successfully.
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137253 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9110a75-f0b3-4268-be0a-49adb929292d-clustermesh-secrets\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137338 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-etc-cni-netd\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137387 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-lib-modules\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137433 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-cgroup\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137502 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-xtables-lock\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137572 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-hostproc\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137628 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-config-path\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137695 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-host-proc-sys-net\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137747 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-host-proc-sys-kernel\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137796 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwhmg\" (UniqueName: \"kubernetes.io/projected/f9110a75-f0b3-4268-be0a-49adb929292d-kube-api-access-kwhmg\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137841 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-run\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.137907 kubelet[2764]: I0412 18:24:20.137883 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cni-path\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.139408 kubelet[2764]: I0412 18:24:20.137931 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9110a75-f0b3-4268-be0a-49adb929292d-hubble-tls\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.139408 kubelet[2764]: I0412 18:24:20.137977 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-bpf-maps\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.139408 kubelet[2764]: I0412 18:24:20.138023 2764 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-ipsec-secrets\") pod \"f9110a75-f0b3-4268-be0a-49adb929292d\" (UID: \"f9110a75-f0b3-4268-be0a-49adb929292d\") "
Apr 12 18:24:20.140618 kubelet[2764]: I0412 18:24:20.140530 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:20.141158 kubelet[2764]: I0412 18:24:20.141081 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:20.141586 kubelet[2764]: I0412 18:24:20.141503 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:20.142313 kubelet[2764]: I0412 18:24:20.141860 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 12 18:24:20.142571 kubelet[2764]: I0412 18:24:20.141903 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-hostproc" (OuterVolumeSpecName: "hostproc") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "hostproc".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:20.142833 kubelet[2764]: I0412 18:24:20.142776 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:20.143164 kubelet[2764]: I0412 18:24:20.143124 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:20.143413 kubelet[2764]: I0412 18:24:20.143358 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:20.144425 kubelet[2764]: I0412 18:24:20.144357 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cni-path" (OuterVolumeSpecName: "cni-path") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:20.153082 kubelet[2764]: I0412 18:24:20.152584 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9110a75-f0b3-4268-be0a-49adb929292d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:24:20.153082 kubelet[2764]: I0412 18:24:20.152699 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:24:20.153549 systemd[1]: var-lib-kubelet-pods-f9110a75\x2df0b3\x2d4268\x2dbe0a\x2d49adb929292d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Apr 12 18:24:20.156721 kubelet[2764]: I0412 18:24:20.156656 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:24:20.159468 systemd[1]: var-lib-kubelet-pods-f9110a75\x2df0b3\x2d4268\x2dbe0a\x2d49adb929292d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 12 18:24:20.164875 kubelet[2764]: I0412 18:24:20.164778 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:24:20.168683 kubelet[2764]: I0412 18:24:20.168621 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9110a75-f0b3-4268-be0a-49adb929292d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:24:20.172508 systemd[1]: var-lib-kubelet-pods-f9110a75\x2df0b3\x2d4268\x2dbe0a\x2d49adb929292d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkwhmg.mount: Deactivated successfully. Apr 12 18:24:20.175903 kubelet[2764]: I0412 18:24:20.174523 2764 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9110a75-f0b3-4268-be0a-49adb929292d-kube-api-access-kwhmg" (OuterVolumeSpecName: "kube-api-access-kwhmg") pod "f9110a75-f0b3-4268-be0a-49adb929292d" (UID: "f9110a75-f0b3-4268-be0a-49adb929292d"). InnerVolumeSpecName "kube-api-access-kwhmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:24:20.172738 systemd[1]: var-lib-kubelet-pods-f9110a75\x2df0b3\x2d4268\x2dbe0a\x2d49adb929292d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:24:20.236066 systemd[1]: Removed slice kubepods-burstable-podf9110a75_f0b3_4268_be0a_49adb929292d.slice. 
Apr 12 18:24:20.242611 kubelet[2764]: I0412 18:24:20.242518 2764 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9110a75-f0b3-4268-be0a-49adb929292d-hubble-tls\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.242611 kubelet[2764]: I0412 18:24:20.242609 2764 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-bpf-maps\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.242912 kubelet[2764]: I0412 18:24:20.242650 2764 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-ipsec-secrets\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.242912 kubelet[2764]: I0412 18:24:20.242681 2764 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cni-path\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.242912 kubelet[2764]: I0412 18:24:20.242710 2764 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9110a75-f0b3-4268-be0a-49adb929292d-clustermesh-secrets\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.242912 kubelet[2764]: I0412 18:24:20.242748 2764 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-etc-cni-netd\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.242912 kubelet[2764]: I0412 18:24:20.242778 2764 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-lib-modules\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.242912 kubelet[2764]: I0412 18:24:20.242807 2764 
reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-xtables-lock\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.242912 kubelet[2764]: I0412 18:24:20.242841 2764 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-cgroup\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.244015 kubelet[2764]: I0412 18:24:20.242918 2764 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-config-path\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.244015 kubelet[2764]: I0412 18:24:20.242958 2764 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-host-proc-sys-net\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.244015 kubelet[2764]: I0412 18:24:20.243140 2764 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-hostproc\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.244015 kubelet[2764]: I0412 18:24:20.243205 2764 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-host-proc-sys-kernel\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.244015 kubelet[2764]: I0412 18:24:20.243236 2764 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kwhmg\" (UniqueName: \"kubernetes.io/projected/f9110a75-f0b3-4268-be0a-49adb929292d-kube-api-access-kwhmg\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.244015 kubelet[2764]: I0412 18:24:20.243264 2764 reconciler_common.go:300] 
"Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9110a75-f0b3-4268-be0a-49adb929292d-cilium-run\") on node \"ip-172-31-18-247\" DevicePath \"\"" Apr 12 18:24:20.866423 kubelet[2764]: I0412 18:24:20.866377 2764 scope.go:117] "RemoveContainer" containerID="4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a" Apr 12 18:24:20.872047 env[1593]: time="2024-04-12T18:24:20.871148077Z" level=info msg="RemoveContainer for \"4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a\"" Apr 12 18:24:20.881296 env[1593]: time="2024-04-12T18:24:20.879547530Z" level=info msg="RemoveContainer for \"4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a\" returns successfully" Apr 12 18:24:20.945564 kubelet[2764]: I0412 18:24:20.945431 2764 topology_manager.go:215] "Topology Admit Handler" podUID="605e5561-c99a-4344-9dab-8a0c70f81f22" podNamespace="kube-system" podName="cilium-rb96d" Apr 12 18:24:20.945742 kubelet[2764]: E0412 18:24:20.945619 2764 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f9110a75-f0b3-4268-be0a-49adb929292d" containerName="mount-cgroup" Apr 12 18:24:20.945742 kubelet[2764]: I0412 18:24:20.945706 2764 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9110a75-f0b3-4268-be0a-49adb929292d" containerName="mount-cgroup" Apr 12 18:24:20.959435 systemd[1]: Created slice kubepods-burstable-pod605e5561_c99a_4344_9dab_8a0c70f81f22.slice. 
Apr 12 18:24:21.047914 kubelet[2764]: I0412 18:24:21.047862 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/605e5561-c99a-4344-9dab-8a0c70f81f22-lib-modules\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.048348 kubelet[2764]: I0412 18:24:21.048292 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/605e5561-c99a-4344-9dab-8a0c70f81f22-cni-path\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.048633 kubelet[2764]: I0412 18:24:21.048600 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/605e5561-c99a-4344-9dab-8a0c70f81f22-hubble-tls\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.048869 kubelet[2764]: I0412 18:24:21.048841 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/605e5561-c99a-4344-9dab-8a0c70f81f22-hostproc\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.049095 kubelet[2764]: I0412 18:24:21.049064 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/605e5561-c99a-4344-9dab-8a0c70f81f22-cilium-config-path\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.049351 kubelet[2764]: I0412 18:24:21.049319 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/605e5561-c99a-4344-9dab-8a0c70f81f22-cilium-run\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.049608 kubelet[2764]: I0412 18:24:21.049575 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/605e5561-c99a-4344-9dab-8a0c70f81f22-etc-cni-netd\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.049844 kubelet[2764]: I0412 18:24:21.049812 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/605e5561-c99a-4344-9dab-8a0c70f81f22-cilium-ipsec-secrets\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.050103 kubelet[2764]: I0412 18:24:21.050060 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/605e5561-c99a-4344-9dab-8a0c70f81f22-host-proc-sys-kernel\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.050369 kubelet[2764]: I0412 18:24:21.050337 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp9jj\" (UniqueName: \"kubernetes.io/projected/605e5561-c99a-4344-9dab-8a0c70f81f22-kube-api-access-fp9jj\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.050615 kubelet[2764]: I0412 18:24:21.050585 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/605e5561-c99a-4344-9dab-8a0c70f81f22-bpf-maps\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.050837 kubelet[2764]: I0412 18:24:21.050808 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/605e5561-c99a-4344-9dab-8a0c70f81f22-cilium-cgroup\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.051105 kubelet[2764]: I0412 18:24:21.051072 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/605e5561-c99a-4344-9dab-8a0c70f81f22-xtables-lock\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.051346 kubelet[2764]: I0412 18:24:21.051314 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/605e5561-c99a-4344-9dab-8a0c70f81f22-clustermesh-secrets\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.051605 kubelet[2764]: I0412 18:24:21.051574 2764 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/605e5561-c99a-4344-9dab-8a0c70f81f22-host-proc-sys-net\") pod \"cilium-rb96d\" (UID: \"605e5561-c99a-4344-9dab-8a0c70f81f22\") " pod="kube-system/cilium-rb96d" Apr 12 18:24:21.266460 env[1593]: time="2024-04-12T18:24:21.266129431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rb96d,Uid:605e5561-c99a-4344-9dab-8a0c70f81f22,Namespace:kube-system,Attempt:0,}" Apr 12 18:24:21.306665 env[1593]: time="2024-04-12T18:24:21.306495777Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:24:21.306997 env[1593]: time="2024-04-12T18:24:21.306581483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:24:21.307293 env[1593]: time="2024-04-12T18:24:21.307194855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:24:21.307952 env[1593]: time="2024-04-12T18:24:21.307818210Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104 pid=4656 runtime=io.containerd.runc.v2 Apr 12 18:24:21.343086 systemd[1]: Started cri-containerd-d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104.scope. Apr 12 18:24:21.410833 env[1593]: time="2024-04-12T18:24:21.410760975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rb96d,Uid:605e5561-c99a-4344-9dab-8a0c70f81f22,Namespace:kube-system,Attempt:0,} returns sandbox id \"d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104\"" Apr 12 18:24:21.419396 env[1593]: time="2024-04-12T18:24:21.419320657Z" level=info msg="CreateContainer within sandbox \"d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:24:21.443738 env[1593]: time="2024-04-12T18:24:21.443625093Z" level=info msg="CreateContainer within sandbox \"d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"87d4141ab7a755252080fdb4f4e2d967f2fb7991e66ea670239d7cce05f9db39\"" Apr 12 18:24:21.446961 env[1593]: time="2024-04-12T18:24:21.445117295Z" level=info msg="StartContainer for \"87d4141ab7a755252080fdb4f4e2d967f2fb7991e66ea670239d7cce05f9db39\"" 
Apr 12 18:24:21.502248 systemd[1]: Started cri-containerd-87d4141ab7a755252080fdb4f4e2d967f2fb7991e66ea670239d7cce05f9db39.scope. Apr 12 18:24:21.617847 env[1593]: time="2024-04-12T18:24:21.617770140Z" level=info msg="StartContainer for \"87d4141ab7a755252080fdb4f4e2d967f2fb7991e66ea670239d7cce05f9db39\" returns successfully" Apr 12 18:24:21.644197 systemd[1]: cri-containerd-87d4141ab7a755252080fdb4f4e2d967f2fb7991e66ea670239d7cce05f9db39.scope: Deactivated successfully. Apr 12 18:24:21.710141 env[1593]: time="2024-04-12T18:24:21.710066838Z" level=info msg="shim disconnected" id=87d4141ab7a755252080fdb4f4e2d967f2fb7991e66ea670239d7cce05f9db39 Apr 12 18:24:21.710676 env[1593]: time="2024-04-12T18:24:21.710616799Z" level=warning msg="cleaning up after shim disconnected" id=87d4141ab7a755252080fdb4f4e2d967f2fb7991e66ea670239d7cce05f9db39 namespace=k8s.io Apr 12 18:24:21.710943 env[1593]: time="2024-04-12T18:24:21.710901338Z" level=info msg="cleaning up dead shim" Apr 12 18:24:21.737442 env[1593]: time="2024-04-12T18:24:21.737363285Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4739 runtime=io.containerd.runc.v2\n" Apr 12 18:24:21.882601 env[1593]: time="2024-04-12T18:24:21.882379534Z" level=info msg="CreateContainer within sandbox \"d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:24:21.907232 env[1593]: time="2024-04-12T18:24:21.907132542Z" level=info msg="CreateContainer within sandbox \"d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0c1cf215d17f29a2525a528ebcc8893c7286f7af1921a0961be1a36570a39b70\"" Apr 12 18:24:21.908394 env[1593]: time="2024-04-12T18:24:21.908309663Z" level=info msg="StartContainer for \"0c1cf215d17f29a2525a528ebcc8893c7286f7af1921a0961be1a36570a39b70\"" Apr 
12 18:24:21.975533 systemd[1]: Started cri-containerd-0c1cf215d17f29a2525a528ebcc8893c7286f7af1921a0961be1a36570a39b70.scope. Apr 12 18:24:22.114546 env[1593]: time="2024-04-12T18:24:22.114464188Z" level=info msg="StartContainer for \"0c1cf215d17f29a2525a528ebcc8893c7286f7af1921a0961be1a36570a39b70\" returns successfully" Apr 12 18:24:22.150702 systemd[1]: cri-containerd-0c1cf215d17f29a2525a528ebcc8893c7286f7af1921a0961be1a36570a39b70.scope: Deactivated successfully. Apr 12 18:24:22.204667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c1cf215d17f29a2525a528ebcc8893c7286f7af1921a0961be1a36570a39b70-rootfs.mount: Deactivated successfully. Apr 12 18:24:22.218522 kubelet[2764]: E0412 18:24:22.216610 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-8qhj6" podUID="8bddbc8f-8806-46cf-ac17-6ce9152b7449" Apr 12 18:24:22.225577 kubelet[2764]: I0412 18:24:22.225495 2764 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f9110a75-f0b3-4268-be0a-49adb929292d" path="/var/lib/kubelet/pods/f9110a75-f0b3-4268-be0a-49adb929292d/volumes" Apr 12 18:24:22.227649 env[1593]: time="2024-04-12T18:24:22.227571492Z" level=info msg="shim disconnected" id=0c1cf215d17f29a2525a528ebcc8893c7286f7af1921a0961be1a36570a39b70 Apr 12 18:24:22.227649 env[1593]: time="2024-04-12T18:24:22.227650730Z" level=warning msg="cleaning up after shim disconnected" id=0c1cf215d17f29a2525a528ebcc8893c7286f7af1921a0961be1a36570a39b70 namespace=k8s.io Apr 12 18:24:22.227985 env[1593]: time="2024-04-12T18:24:22.227674959Z" level=info msg="cleaning up dead shim" Apr 12 18:24:22.243283 env[1593]: time="2024-04-12T18:24:22.243190533Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4801 
runtime=io.containerd.runc.v2\n" Apr 12 18:24:22.598275 kubelet[2764]: W0412 18:24:22.598188 2764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9110a75_f0b3_4268_be0a_49adb929292d.slice/cri-containerd-4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a.scope WatchSource:0}: container "4541549b82cae079712027fdc8c034a95dce3feae38f6acb0edaef0158c8679a" in namespace "k8s.io": not found Apr 12 18:24:22.898410 env[1593]: time="2024-04-12T18:24:22.898201217Z" level=info msg="CreateContainer within sandbox \"d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:24:22.933336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3918432189.mount: Deactivated successfully. Apr 12 18:24:22.944076 env[1593]: time="2024-04-12T18:24:22.943929985Z" level=info msg="CreateContainer within sandbox \"d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"76f3ea68e61d0cabaeead50cf2c6e87221f5044b7d5079151ad0f20a500c0cb7\"" Apr 12 18:24:22.945724 env[1593]: time="2024-04-12T18:24:22.945629252Z" level=info msg="StartContainer for \"76f3ea68e61d0cabaeead50cf2c6e87221f5044b7d5079151ad0f20a500c0cb7\"" Apr 12 18:24:22.987593 systemd[1]: Started cri-containerd-76f3ea68e61d0cabaeead50cf2c6e87221f5044b7d5079151ad0f20a500c0cb7.scope. Apr 12 18:24:23.066478 env[1593]: time="2024-04-12T18:24:23.066396894Z" level=info msg="StartContainer for \"76f3ea68e61d0cabaeead50cf2c6e87221f5044b7d5079151ad0f20a500c0cb7\" returns successfully" Apr 12 18:24:23.068531 systemd[1]: cri-containerd-76f3ea68e61d0cabaeead50cf2c6e87221f5044b7d5079151ad0f20a500c0cb7.scope: Deactivated successfully. 
Apr 12 18:24:23.122311 env[1593]: time="2024-04-12T18:24:23.122220992Z" level=info msg="shim disconnected" id=76f3ea68e61d0cabaeead50cf2c6e87221f5044b7d5079151ad0f20a500c0cb7 Apr 12 18:24:23.122656 env[1593]: time="2024-04-12T18:24:23.122314007Z" level=warning msg="cleaning up after shim disconnected" id=76f3ea68e61d0cabaeead50cf2c6e87221f5044b7d5079151ad0f20a500c0cb7 namespace=k8s.io Apr 12 18:24:23.122656 env[1593]: time="2024-04-12T18:24:23.122338763Z" level=info msg="cleaning up dead shim" Apr 12 18:24:23.138778 env[1593]: time="2024-04-12T18:24:23.138703096Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4857 runtime=io.containerd.runc.v2\n" Apr 12 18:24:23.160965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76f3ea68e61d0cabaeead50cf2c6e87221f5044b7d5079151ad0f20a500c0cb7-rootfs.mount: Deactivated successfully. Apr 12 18:24:23.904180 env[1593]: time="2024-04-12T18:24:23.904105274Z" level=info msg="CreateContainer within sandbox \"d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 18:24:23.937785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount96720116.mount: Deactivated successfully. Apr 12 18:24:23.943445 env[1593]: time="2024-04-12T18:24:23.943342375Z" level=info msg="CreateContainer within sandbox \"d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee3b3ef5d5cb40267ef08557bf89a0d8c173e46098b4f12476030ea8d0d7de59\"" Apr 12 18:24:23.944907 env[1593]: time="2024-04-12T18:24:23.944843997Z" level=info msg="StartContainer for \"ee3b3ef5d5cb40267ef08557bf89a0d8c173e46098b4f12476030ea8d0d7de59\"" Apr 12 18:24:24.004348 systemd[1]: Started cri-containerd-ee3b3ef5d5cb40267ef08557bf89a0d8c173e46098b4f12476030ea8d0d7de59.scope. 
Apr 12 18:24:24.079288 systemd[1]: cri-containerd-ee3b3ef5d5cb40267ef08557bf89a0d8c173e46098b4f12476030ea8d0d7de59.scope: Deactivated successfully. Apr 12 18:24:24.084559 env[1593]: time="2024-04-12T18:24:24.083522821Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod605e5561_c99a_4344_9dab_8a0c70f81f22.slice/cri-containerd-ee3b3ef5d5cb40267ef08557bf89a0d8c173e46098b4f12476030ea8d0d7de59.scope/memory.events\": no such file or directory" Apr 12 18:24:24.087441 env[1593]: time="2024-04-12T18:24:24.087357410Z" level=info msg="StartContainer for \"ee3b3ef5d5cb40267ef08557bf89a0d8c173e46098b4f12476030ea8d0d7de59\" returns successfully" Apr 12 18:24:24.141445 env[1593]: time="2024-04-12T18:24:24.141375311Z" level=info msg="shim disconnected" id=ee3b3ef5d5cb40267ef08557bf89a0d8c173e46098b4f12476030ea8d0d7de59 Apr 12 18:24:24.142216 env[1593]: time="2024-04-12T18:24:24.142157611Z" level=warning msg="cleaning up after shim disconnected" id=ee3b3ef5d5cb40267ef08557bf89a0d8c173e46098b4f12476030ea8d0d7de59 namespace=k8s.io Apr 12 18:24:24.142466 env[1593]: time="2024-04-12T18:24:24.142422098Z" level=info msg="cleaning up dead shim" Apr 12 18:24:24.161253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee3b3ef5d5cb40267ef08557bf89a0d8c173e46098b4f12476030ea8d0d7de59-rootfs.mount: Deactivated successfully. 
Apr 12 18:24:24.163679 env[1593]: time="2024-04-12T18:24:24.163616805Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4914 runtime=io.containerd.runc.v2\n" Apr 12 18:24:24.220763 kubelet[2764]: E0412 18:24:24.220705 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-8qhj6" podUID="8bddbc8f-8806-46cf-ac17-6ce9152b7449" Apr 12 18:24:24.506016 kubelet[2764]: E0412 18:24:24.505866 2764 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:24:24.929974 env[1593]: time="2024-04-12T18:24:24.929595815Z" level=info msg="CreateContainer within sandbox \"d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:24:24.974213 env[1593]: time="2024-04-12T18:24:24.974113964Z" level=info msg="CreateContainer within sandbox \"d90a977da1c0d670dce52938f266b085a4e5d52acf9e7b0576faa290d87e2104\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0f00a1729b1a9755fc3af5490a96bc9f2961efbbe48c5c49964fb527e2e54c63\"" Apr 12 18:24:24.978165 env[1593]: time="2024-04-12T18:24:24.975569133Z" level=info msg="StartContainer for \"0f00a1729b1a9755fc3af5490a96bc9f2961efbbe48c5c49964fb527e2e54c63\"" Apr 12 18:24:25.031358 systemd[1]: Started cri-containerd-0f00a1729b1a9755fc3af5490a96bc9f2961efbbe48c5c49964fb527e2e54c63.scope. 
Apr 12 18:24:25.109858 env[1593]: time="2024-04-12T18:24:25.109781073Z" level=info msg="StartContainer for \"0f00a1729b1a9755fc3af5490a96bc9f2961efbbe48c5c49964fb527e2e54c63\" returns successfully"
Apr 12 18:24:25.715440 kubelet[2764]: W0412 18:24:25.715381 2764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod605e5561_c99a_4344_9dab_8a0c70f81f22.slice/cri-containerd-87d4141ab7a755252080fdb4f4e2d967f2fb7991e66ea670239d7cce05f9db39.scope WatchSource:0}: task 87d4141ab7a755252080fdb4f4e2d967f2fb7991e66ea670239d7cce05f9db39 not found: not found
Apr 12 18:24:25.957769 kubelet[2764]: I0412 18:24:25.957685 2764 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rb96d" podStartSLOduration=5.957596144 podStartE2EDuration="5.957596144s" podCreationTimestamp="2024-04-12 18:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:24:25.957107108 +0000 UTC m=+132.124352791" watchObservedRunningTime="2024-04-12 18:24:25.957596144 +0000 UTC m=+132.124841743"
Apr 12 18:24:25.970332 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Apr 12 18:24:26.217387 kubelet[2764]: E0412 18:24:26.217337 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-8qhj6" podUID="8bddbc8f-8806-46cf-ac17-6ce9152b7449"
Apr 12 18:24:26.395669 systemd[1]: run-containerd-runc-k8s.io-0f00a1729b1a9755fc3af5490a96bc9f2961efbbe48c5c49964fb527e2e54c63-runc.39f04y.mount: Deactivated successfully.
Apr 12 18:24:28.217391 kubelet[2764]: E0412 18:24:28.217332 2764 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-8qhj6" podUID="8bddbc8f-8806-46cf-ac17-6ce9152b7449"
Apr 12 18:24:28.706724 systemd[1]: run-containerd-runc-k8s.io-0f00a1729b1a9755fc3af5490a96bc9f2961efbbe48c5c49964fb527e2e54c63-runc.iUFk9A.mount: Deactivated successfully.
Apr 12 18:24:28.834195 kubelet[2764]: W0412 18:24:28.833872 2764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod605e5561_c99a_4344_9dab_8a0c70f81f22.slice/cri-containerd-0c1cf215d17f29a2525a528ebcc8893c7286f7af1921a0961be1a36570a39b70.scope WatchSource:0}: task 0c1cf215d17f29a2525a528ebcc8893c7286f7af1921a0961be1a36570a39b70 not found: not found
Apr 12 18:24:30.543884 (udev-worker)[5491]: Network interface NamePolicy= disabled on kernel command line.
Apr 12 18:24:30.550000 (udev-worker)[5492]: Network interface NamePolicy= disabled on kernel command line.
Apr 12 18:24:30.555359 systemd-networkd[1403]: lxc_health: Link UP
Apr 12 18:24:30.595138 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:24:30.597499 systemd-networkd[1403]: lxc_health: Gained carrier
Apr 12 18:24:31.857386 systemd-networkd[1403]: lxc_health: Gained IPv6LL
Apr 12 18:24:31.950760 kubelet[2764]: W0412 18:24:31.950677 2764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod605e5561_c99a_4344_9dab_8a0c70f81f22.slice/cri-containerd-76f3ea68e61d0cabaeead50cf2c6e87221f5044b7d5079151ad0f20a500c0cb7.scope WatchSource:0}: task 76f3ea68e61d0cabaeead50cf2c6e87221f5044b7d5079151ad0f20a500c0cb7 not found: not found
Apr 12 18:24:33.495605 systemd[1]: run-containerd-runc-k8s.io-0f00a1729b1a9755fc3af5490a96bc9f2961efbbe48c5c49964fb527e2e54c63-runc.xXs6x3.mount: Deactivated successfully.
Apr 12 18:24:35.074958 kubelet[2764]: W0412 18:24:35.073850 2764 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod605e5561_c99a_4344_9dab_8a0c70f81f22.slice/cri-containerd-ee3b3ef5d5cb40267ef08557bf89a0d8c173e46098b4f12476030ea8d0d7de59.scope WatchSource:0}: task ee3b3ef5d5cb40267ef08557bf89a0d8c173e46098b4f12476030ea8d0d7de59 not found: not found
Apr 12 18:24:35.834169 systemd[1]: run-containerd-runc-k8s.io-0f00a1729b1a9755fc3af5490a96bc9f2961efbbe48c5c49964fb527e2e54c63-runc.PjzN6t.mount: Deactivated successfully.
Apr 12 18:24:38.159366 systemd[1]: run-containerd-runc-k8s.io-0f00a1729b1a9755fc3af5490a96bc9f2961efbbe48c5c49964fb527e2e54c63-runc.mK2F2P.mount: Deactivated successfully.
Apr 12 18:24:38.373536 sshd[4586]: pam_unix(sshd:session): session closed for user core
Apr 12 18:24:38.379457 systemd[1]: sshd@26-172.31.18.247:22-139.178.89.65:54970.service: Deactivated successfully.
Apr 12 18:24:38.380992 systemd[1]: session-27.scope: Deactivated successfully.
Apr 12 18:24:38.383289 systemd-logind[1581]: Session 27 logged out. Waiting for processes to exit.
Apr 12 18:24:38.387521 systemd-logind[1581]: Removed session 27.
Apr 12 18:24:52.903930 systemd[1]: cri-containerd-eab6a3d8a127075545956f81503e455b63845783a519df5a33bbcaac6a40ec5f.scope: Deactivated successfully.
Apr 12 18:24:52.904724 systemd[1]: cri-containerd-eab6a3d8a127075545956f81503e455b63845783a519df5a33bbcaac6a40ec5f.scope: Consumed 6.410s CPU time.
Apr 12 18:24:52.951595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eab6a3d8a127075545956f81503e455b63845783a519df5a33bbcaac6a40ec5f-rootfs.mount: Deactivated successfully.
Apr 12 18:24:52.972007 env[1593]: time="2024-04-12T18:24:52.971929771Z" level=info msg="shim disconnected" id=eab6a3d8a127075545956f81503e455b63845783a519df5a33bbcaac6a40ec5f
Apr 12 18:24:52.973001 env[1593]: time="2024-04-12T18:24:52.972941086Z" level=warning msg="cleaning up after shim disconnected" id=eab6a3d8a127075545956f81503e455b63845783a519df5a33bbcaac6a40ec5f namespace=k8s.io
Apr 12 18:24:52.973300 env[1593]: time="2024-04-12T18:24:52.973255507Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:52.991117 env[1593]: time="2024-04-12T18:24:52.991018873Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5625 runtime=io.containerd.runc.v2\n"
Apr 12 18:24:53.013058 kubelet[2764]: I0412 18:24:53.011554 2764 scope.go:117] "RemoveContainer" containerID="eab6a3d8a127075545956f81503e455b63845783a519df5a33bbcaac6a40ec5f"
Apr 12 18:24:53.018194 env[1593]: time="2024-04-12T18:24:53.018110725Z" level=info msg="CreateContainer within sandbox \"79991a12abe18d0a9687b2d3f0c6edae54ab7fee93341f9a96c2d4fa17803b46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 12 18:24:53.042288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212544009.mount: Deactivated successfully.
Apr 12 18:24:53.057764 env[1593]: time="2024-04-12T18:24:53.057662535Z" level=info msg="CreateContainer within sandbox \"79991a12abe18d0a9687b2d3f0c6edae54ab7fee93341f9a96c2d4fa17803b46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"79602848d57499f4e577a35d87c17046399de4575cadadb4e84d264183854f32\""
Apr 12 18:24:53.058793 env[1593]: time="2024-04-12T18:24:53.058668678Z" level=info msg="StartContainer for \"79602848d57499f4e577a35d87c17046399de4575cadadb4e84d264183854f32\""
Apr 12 18:24:53.114431 systemd[1]: Started cri-containerd-79602848d57499f4e577a35d87c17046399de4575cadadb4e84d264183854f32.scope.
Apr 12 18:24:53.205757 env[1593]: time="2024-04-12T18:24:53.205518934Z" level=info msg="StartContainer for \"79602848d57499f4e577a35d87c17046399de4575cadadb4e84d264183854f32\" returns successfully"
Apr 12 18:24:57.467238 systemd[1]: cri-containerd-ab9385916a0aaf13fd47571930de58f1e1626442e71e3bff8e5bbc9f4a826a3f.scope: Deactivated successfully.
Apr 12 18:24:57.467931 systemd[1]: cri-containerd-ab9385916a0aaf13fd47571930de58f1e1626442e71e3bff8e5bbc9f4a826a3f.scope: Consumed 5.737s CPU time.
Apr 12 18:24:57.484185 kubelet[2764]: E0412 18:24:57.484097 2764 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-247?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 12 18:24:57.521625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab9385916a0aaf13fd47571930de58f1e1626442e71e3bff8e5bbc9f4a826a3f-rootfs.mount: Deactivated successfully.
Apr 12 18:24:57.540196 env[1593]: time="2024-04-12T18:24:57.539995656Z" level=info msg="shim disconnected" id=ab9385916a0aaf13fd47571930de58f1e1626442e71e3bff8e5bbc9f4a826a3f
Apr 12 18:24:57.541150 env[1593]: time="2024-04-12T18:24:57.541008687Z" level=warning msg="cleaning up after shim disconnected" id=ab9385916a0aaf13fd47571930de58f1e1626442e71e3bff8e5bbc9f4a826a3f namespace=k8s.io
Apr 12 18:24:57.541498 env[1593]: time="2024-04-12T18:24:57.541452159Z" level=info msg="cleaning up dead shim"
Apr 12 18:24:57.559567 env[1593]: time="2024-04-12T18:24:57.559500428Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:24:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5683 runtime=io.containerd.runc.v2\n"
Apr 12 18:24:58.035125 kubelet[2764]: I0412 18:24:58.035016 2764 scope.go:117] "RemoveContainer" containerID="ab9385916a0aaf13fd47571930de58f1e1626442e71e3bff8e5bbc9f4a826a3f"
Apr 12 18:24:58.040697 env[1593]: time="2024-04-12T18:24:58.040616092Z" level=info msg="CreateContainer within sandbox \"b0e231644c5f4c93aff56c116d17c94cbb83e1d55c99bc5c5cc1119a3b95af6a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 12 18:24:58.067517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176568409.mount: Deactivated successfully.
Apr 12 18:24:58.086887 env[1593]: time="2024-04-12T18:24:58.086739753Z" level=info msg="CreateContainer within sandbox \"b0e231644c5f4c93aff56c116d17c94cbb83e1d55c99bc5c5cc1119a3b95af6a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c07b888851984695cb99e7a96d80657690e0195c70df8c48a2409dcdb9913569\""
Apr 12 18:24:58.088221 env[1593]: time="2024-04-12T18:24:58.088128406Z" level=info msg="StartContainer for \"c07b888851984695cb99e7a96d80657690e0195c70df8c48a2409dcdb9913569\""
Apr 12 18:24:58.130342 systemd[1]: Started cri-containerd-c07b888851984695cb99e7a96d80657690e0195c70df8c48a2409dcdb9913569.scope.
Apr 12 18:24:58.229342 env[1593]: time="2024-04-12T18:24:58.229268125Z" level=info msg="StartContainer for \"c07b888851984695cb99e7a96d80657690e0195c70df8c48a2409dcdb9913569\" returns successfully"