Oct 2 19:26:26.184382 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Oct 2 19:26:26.184420 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 19:26:26.184442 kernel: efi: EFI v2.70 by EDK II Oct 2 19:26:26.184457 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71accf98 Oct 2 19:26:26.184471 kernel: ACPI: Early table checksum verification disabled Oct 2 19:26:26.184485 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Oct 2 19:26:26.184500 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Oct 2 19:26:26.184515 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 2 19:26:26.184529 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Oct 2 19:26:26.184542 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 2 19:26:26.189372 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Oct 2 19:26:26.189390 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Oct 2 19:26:26.189404 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Oct 2 19:26:26.189419 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 2 19:26:26.189436 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Oct 2 19:26:26.189458 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Oct 2 19:26:26.189473 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Oct 2 19:26:26.189487 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Oct 2 19:26:26.189502 kernel: printk: bootconsole [uart0] enabled Oct 2 19:26:26.189516 kernel: NUMA: Failed to initialise from firmware Oct 2 19:26:26.189532 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:26:26.189547 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff] Oct 2 19:26:26.189595 kernel: Zone ranges: Oct 2 19:26:26.189611 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Oct 2 19:26:26.189626 kernel: DMA32 empty Oct 2 19:26:26.189640 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Oct 2 19:26:26.189660 kernel: Movable zone start for each node Oct 2 19:26:26.189675 kernel: Early memory node ranges Oct 2 19:26:26.189690 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff] Oct 2 19:26:26.189704 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Oct 2 19:26:26.189718 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Oct 2 19:26:26.189733 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Oct 2 19:26:26.189747 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Oct 2 19:26:26.189762 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Oct 2 19:26:26.189776 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Oct 2 19:26:26.189790 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Oct 2 19:26:26.189805 kernel: psci: probing for conduit method from ACPI. Oct 2 19:26:26.189819 kernel: psci: PSCIv1.0 detected in firmware. 
Oct 2 19:26:26.189838 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 19:26:26.189852 kernel: psci: Trusted OS migration not required Oct 2 19:26:26.189874 kernel: psci: SMC Calling Convention v1.1 Oct 2 19:26:26.189889 kernel: ACPI: SRAT not present Oct 2 19:26:26.189905 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 19:26:26.189924 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 19:26:26.189940 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 2 19:26:26.189955 kernel: Detected PIPT I-cache on CPU0 Oct 2 19:26:26.189970 kernel: CPU features: detected: GIC system register CPU interface Oct 2 19:26:26.189985 kernel: CPU features: detected: Spectre-v2 Oct 2 19:26:26.190000 kernel: CPU features: detected: Spectre-v3a Oct 2 19:26:26.190015 kernel: CPU features: detected: Spectre-BHB Oct 2 19:26:26.190030 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 19:26:26.190045 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 19:26:26.190060 kernel: CPU features: detected: ARM erratum 1742098 Oct 2 19:26:26.190075 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Oct 2 19:26:26.190094 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Oct 2 19:26:26.190110 kernel: Policy zone: Normal Oct 2 19:26:26.190128 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:26:26.190144 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:26:26.190159 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:26:26.190175 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:26:26.190190 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:26:26.190205 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Oct 2 19:26:26.190221 kernel: Memory: 3826444K/4030464K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 204020K reserved, 0K cma-reserved) Oct 2 19:26:26.190237 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:26:26.190255 kernel: trace event string verifier disabled Oct 2 19:26:26.190270 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 19:26:26.190286 kernel: rcu: RCU event tracing is enabled. Oct 2 19:26:26.190302 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 19:26:26.190317 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 19:26:26.190333 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:26:26.190348 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 19:26:26.190364 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:26:26.190379 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 19:26:26.190394 kernel: GICv3: 96 SPIs implemented Oct 2 19:26:26.190409 kernel: GICv3: 0 Extended SPIs implemented Oct 2 19:26:26.190423 kernel: GICv3: Distributor has no Range Selector support Oct 2 19:26:26.190442 kernel: Root IRQ handler: gic_handle_irq Oct 2 19:26:26.190457 kernel: GICv3: 16 PPIs implemented Oct 2 19:26:26.190472 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Oct 2 19:26:26.190487 kernel: ACPI: SRAT not present Oct 2 19:26:26.190502 kernel: ITS [mem 0x10080000-0x1009ffff] Oct 2 19:26:26.190517 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 19:26:26.190532 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1) Oct 2 19:26:26.190548 kernel: GICv3: using LPI property table @0x00000004000c0000 Oct 2 19:26:26.190585 kernel: ITS: Using hypervisor restricted LPI range [128] Oct 2 19:26:26.190601 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Oct 2 19:26:26.190616 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Oct 2 19:26:26.190637 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Oct 2 19:26:26.190653 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Oct 2 19:26:26.190668 kernel: Console: colour dummy device 80x25 Oct 2 19:26:26.190684 kernel: printk: console [tty1] enabled Oct 2 19:26:26.190699 kernel: ACPI: Core revision 20210730 Oct 2 19:26:26.190715 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Oct 2 19:26:26.190731 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:26:26.190747 kernel: LSM: Security Framework initializing Oct 2 19:26:26.190762 kernel: SELinux: Initializing. Oct 2 19:26:26.190778 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:26:26.190800 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:26:26.190815 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:26:26.190830 kernel: Platform MSI: ITS@0x10080000 domain created Oct 2 19:26:26.190846 kernel: PCI/MSI: ITS@0x10080000 domain created Oct 2 19:26:26.190861 kernel: Remapping and enabling EFI services. Oct 2 19:26:26.190877 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:26:26.190893 kernel: Detected PIPT I-cache on CPU1 Oct 2 19:26:26.190908 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Oct 2 19:26:26.190924 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Oct 2 19:26:26.190945 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Oct 2 19:26:26.190961 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:26:26.190976 kernel: SMP: Total of 2 processors activated. 
Oct 2 19:26:26.190991 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 19:26:26.191007 kernel: CPU features: detected: 32-bit EL1 Support Oct 2 19:26:26.191022 kernel: CPU features: detected: CRC32 instructions Oct 2 19:26:26.191037 kernel: CPU: All CPU(s) started at EL1 Oct 2 19:26:26.191052 kernel: alternatives: patching kernel code Oct 2 19:26:26.191068 kernel: devtmpfs: initialized Oct 2 19:26:26.191087 kernel: KASLR disabled due to lack of seed Oct 2 19:26:26.191103 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:26:26.191118 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:26:26.191145 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:26:26.191164 kernel: SMBIOS 3.0.0 present. Oct 2 19:26:26.191180 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Oct 2 19:26:26.191196 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:26:26.191213 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 19:26:26.191229 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 19:26:26.191245 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 19:26:26.191261 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:26:26.191277 kernel: audit: type=2000 audit(0.250:1): state=initialized audit_enabled=0 res=1 Oct 2 19:26:26.191297 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:26:26.191313 kernel: cpuidle: using governor menu Oct 2 19:26:26.191329 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 19:26:26.191345 kernel: ASID allocator initialised with 32768 entries Oct 2 19:26:26.191361 kernel: ACPI: bus type PCI registered Oct 2 19:26:26.191381 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:26:26.191416 kernel: Serial: AMBA PL011 UART driver Oct 2 19:26:26.191436 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:26:26.191452 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 19:26:26.191469 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:26:26.191485 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 19:26:26.191501 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:26:26.191517 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 19:26:26.191533 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:26:26.191580 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:26:26.191601 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:26:26.191617 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:26:26.191634 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:26:26.191650 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:26:26.191666 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:26:26.191683 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:26:26.191700 kernel: ACPI: Interpreter enabled Oct 2 19:26:26.191716 kernel: ACPI: Using GIC for interrupt routing Oct 2 19:26:26.191740 kernel: ACPI: MCFG table detected, 1 entries Oct 2 19:26:26.191758 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Oct 2 19:26:26.192179 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:26:26.192379 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 19:26:26.192603 kernel: 
acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 19:26:26.192801 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Oct 2 19:26:26.192993 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Oct 2 19:26:26.193022 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Oct 2 19:26:26.193039 kernel: acpiphp: Slot [1] registered Oct 2 19:26:26.193055 kernel: acpiphp: Slot [2] registered Oct 2 19:26:26.193071 kernel: acpiphp: Slot [3] registered Oct 2 19:26:26.193087 kernel: acpiphp: Slot [4] registered Oct 2 19:26:26.193103 kernel: acpiphp: Slot [5] registered Oct 2 19:26:26.193119 kernel: acpiphp: Slot [6] registered Oct 2 19:26:26.193135 kernel: acpiphp: Slot [7] registered Oct 2 19:26:26.193151 kernel: acpiphp: Slot [8] registered Oct 2 19:26:26.193171 kernel: acpiphp: Slot [9] registered Oct 2 19:26:26.193187 kernel: acpiphp: Slot [10] registered Oct 2 19:26:26.193203 kernel: acpiphp: Slot [11] registered Oct 2 19:26:26.193219 kernel: acpiphp: Slot [12] registered Oct 2 19:26:26.193235 kernel: acpiphp: Slot [13] registered Oct 2 19:26:26.193251 kernel: acpiphp: Slot [14] registered Oct 2 19:26:26.193267 kernel: acpiphp: Slot [15] registered Oct 2 19:26:26.193283 kernel: acpiphp: Slot [16] registered Oct 2 19:26:26.193299 kernel: acpiphp: Slot [17] registered Oct 2 19:26:26.193315 kernel: acpiphp: Slot [18] registered Oct 2 19:26:26.193336 kernel: acpiphp: Slot [19] registered Oct 2 19:26:26.193353 kernel: acpiphp: Slot [20] registered Oct 2 19:26:26.193369 kernel: acpiphp: Slot [21] registered Oct 2 19:26:26.193385 kernel: acpiphp: Slot [22] registered Oct 2 19:26:26.193400 kernel: acpiphp: Slot [23] registered Oct 2 19:26:26.193416 kernel: acpiphp: Slot [24] registered Oct 2 19:26:26.193432 kernel: acpiphp: Slot [25] registered Oct 2 19:26:26.193448 kernel: acpiphp: Slot [26] registered Oct 2 19:26:26.193464 kernel: acpiphp: Slot [27] registered Oct 2 19:26:26.193484 kernel: acpiphp: Slot [28] registered Oct 2 19:26:26.193500 kernel: acpiphp: Slot [29] registered Oct 2 19:26:26.193516 kernel: acpiphp: Slot [30] registered Oct 2 19:26:26.193532 kernel: acpiphp: Slot [31] registered Oct 2 19:26:26.193548 kernel: PCI host bridge to bus 0000:00 Oct 2 19:26:26.201619 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Oct 2 19:26:26.201875 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 2 19:26:26.202056 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Oct 2 19:26:26.202242 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Oct 2 19:26:26.202468 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Oct 2 19:26:26.202722 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Oct 2 19:26:26.202925 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Oct 2 19:26:26.203135 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Oct 2 19:26:26.203338 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Oct 2 19:26:26.203592 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:26:26.203814 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Oct 2 19:26:26.204016 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Oct 2 19:26:26.204211 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Oct 2 19:26:26.204412 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Oct 2 19:26:26.204645 kernel: 
pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 2 19:26:26.204850 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Oct 2 19:26:26.205065 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Oct 2 19:26:26.205276 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Oct 2 19:26:26.205482 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Oct 2 19:26:26.205737 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Oct 2 19:26:26.206824 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Oct 2 19:26:26.207070 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 19:26:26.207278 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Oct 2 19:26:26.207309 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 19:26:26.207327 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 19:26:26.207343 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 19:26:26.207360 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 19:26:26.207376 kernel: iommu: Default domain type: Translated Oct 2 19:26:26.207413 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 19:26:26.207434 kernel: vgaarb: loaded Oct 2 19:26:26.207451 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:26:26.207468 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:26:26.207489 kernel: PTP clock support registered Oct 2 19:26:26.207506 kernel: Registered efivars operations Oct 2 19:26:26.207522 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 19:26:26.207538 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:26:26.207588 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:26:26.207607 kernel: pnp: PnP ACPI init Oct 2 19:26:26.214206 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Oct 2 19:26:26.214249 kernel: pnp: PnP ACPI: found 1 devices Oct 2 19:26:26.214267 kernel: NET: Registered PF_INET protocol family Oct 2 19:26:26.214291 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:26:26.214308 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:26:26.214325 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:26:26.214341 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:26:26.214358 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:26:26.214374 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:26:26.214391 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:26:26.214407 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:26:26.214424 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:26:26.214444 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:26:26.214460 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Oct 2 19:26:26.214477 kernel: kvm [1]: HYP mode not available Oct 2 19:26:26.214495 kernel: Initialise system trusted keyrings Oct 2 19:26:26.214513 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:26:26.214531 kernel: Key type asymmetric registered Oct 2 19:26:26.214548 kernel: Asymmetric key parser 'x509' registered Oct 2 19:26:26.214584 kernel: Block 
layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:26:26.214601 kernel: io scheduler mq-deadline registered Oct 2 19:26:26.214623 kernel: io scheduler kyber registered Oct 2 19:26:26.214639 kernel: io scheduler bfq registered Oct 2 19:26:26.214850 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Oct 2 19:26:26.214875 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 19:26:26.214893 kernel: ACPI: button: Power Button [PWRB] Oct 2 19:26:26.214910 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:26:26.214927 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 2 19:26:26.215129 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Oct 2 19:26:26.215158 kernel: printk: console [ttyS0] disabled Oct 2 19:26:26.215176 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Oct 2 19:26:26.215193 kernel: printk: console [ttyS0] enabled Oct 2 19:26:26.215211 kernel: printk: bootconsole [uart0] disabled Oct 2 19:26:26.215229 kernel: thunder_xcv, ver 1.0 Oct 2 19:26:26.215246 kernel: thunder_bgx, ver 1.0 Oct 2 19:26:26.215262 kernel: nicpf, ver 1.0 Oct 2 19:26:26.215279 kernel: nicvf, ver 1.0 Oct 2 19:26:26.215516 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 19:26:26.215750 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:26:25 UTC (1696274785) Oct 2 19:26:26.215777 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:26:26.215795 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:26:26.215813 kernel: Segment Routing with IPv6 Oct 2 19:26:26.215830 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:26:26.215848 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:26:26.215865 kernel: Key type dns_resolver registered Oct 2 19:26:26.215882 kernel: registered taskstats version 1 Oct 2 19:26:26.215905 kernel: Loading compiled-in X.509 certificates Oct 2 19:26:26.215922 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 19:26:26.215940 kernel: Key type .fscrypt registered Oct 2 19:26:26.215958 kernel: Key type fscrypt-provisioning registered Oct 2 19:26:26.215975 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:26:26.215992 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:26:26.216009 kernel: ima: No architecture policies found Oct 2 19:26:26.216027 kernel: Freeing unused kernel memory: 34560K Oct 2 19:26:26.216044 kernel: Run /init as init process Oct 2 19:26:26.216065 kernel: with arguments: Oct 2 19:26:26.216083 kernel: /init Oct 2 19:26:26.216099 kernel: with environment: Oct 2 19:26:26.216116 kernel: HOME=/ Oct 2 19:26:26.216134 kernel: TERM=linux Oct 2 19:26:26.216151 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:26:26.216173 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:26:26.216194 systemd[1]: Detected virtualization amazon. Oct 2 19:26:26.216217 systemd[1]: Detected architecture arm64. Oct 2 19:26:26.216237 systemd[1]: Running in initrd. Oct 2 19:26:26.216255 systemd[1]: No hostname configured, using default hostname. Oct 2 19:26:26.216274 systemd[1]: Hostname set to . 
Oct 2 19:26:26.216292 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:26:26.216311 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:26:26.216330 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:26:26.216349 systemd[1]: Reached target cryptsetup.target. Oct 2 19:26:26.216372 systemd[1]: Reached target paths.target. Oct 2 19:26:26.216390 systemd[1]: Reached target slices.target. Oct 2 19:26:26.216408 systemd[1]: Reached target swap.target. Oct 2 19:26:26.216427 systemd[1]: Reached target timers.target. Oct 2 19:26:26.216447 systemd[1]: Listening on iscsid.socket. Oct 2 19:26:26.216466 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:26:26.216484 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:26:26.216504 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:26:26.216527 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:26:26.216546 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:26:26.216595 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:26:26.216614 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:26:26.216632 systemd[1]: Reached target sockets.target. Oct 2 19:26:26.216650 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:26:26.216669 systemd[1]: Finished network-cleanup.service. Oct 2 19:26:26.216689 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:26:26.216708 systemd[1]: Starting systemd-journald.service... Oct 2 19:26:26.216732 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:26:26.216749 systemd[1]: Starting systemd-resolved.service... Oct 2 19:26:26.216767 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:26:26.216785 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:26:26.216805 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:26:26.216824 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:26:26.216843 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:26:26.216862 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:26:26.216880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:26:26.216902 kernel: audit: type=1130 audit(1696274786.170:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.216920 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:26:26.216942 systemd-journald[308]: Journal started Oct 2 19:26:26.217030 systemd-journald[308]: Runtime Journal (/run/log/journal/ec288522476c28ff868ce35939698cff) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:26:26.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.142200 systemd-modules-load[309]: Inserted module 'overlay' Oct 2 19:26:26.221390 systemd[1]: Started systemd-journald.service. Oct 2 19:26:26.214387 systemd-resolved[310]: Positive Trust Anchors: Oct 2 19:26:26.214400 systemd-resolved[310]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:26:26.214461 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:26:26.236837 kernel: audit: type=1130 audit(1696274786.224:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.236904 kernel: Bridge firewalling registered Oct 2 19:26:26.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.237069 systemd-modules-load[309]: Inserted module 'br_netfilter' Oct 2 19:26:26.264827 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:26:26.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.277290 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:26:26.280571 kernel: audit: type=1130 audit(1696274786.265:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.298586 kernel: SCSI subsystem initialized Oct 2 19:26:26.319177 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:26:26.319246 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:26:26.327599 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:26:26.332910 systemd-modules-load[309]: Inserted module 'dm_multipath' Oct 2 19:26:26.337010 dracut-cmdline[327]: dracut-dracut-053 Oct 2 19:26:26.339186 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:26:26.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.343842 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:26:26.354594 kernel: audit: type=1130 audit(1696274786.341:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.373451 dracut-cmdline[327]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:26:26.385688 systemd[1]: Finished systemd-sysctl.service. 
Oct 2 19:26:26.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.402580 kernel: audit: type=1130 audit(1696274786.386:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.618584 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:26:26.629589 kernel: iscsi: registered transport (tcp) Oct 2 19:26:26.656355 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:26:26.656429 kernel: QLogic iSCSI HBA Driver Oct 2 19:26:26.813591 kernel: random: crng init done Oct 2 19:26:26.813681 systemd-resolved[310]: Defaulting to hostname 'linux'. Oct 2 19:26:26.817475 systemd[1]: Started systemd-resolved.service. Oct 2 19:26:26.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.821095 systemd[1]: Reached target nss-lookup.target. Oct 2 19:26:26.831829 kernel: audit: type=1130 audit(1696274786.819:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.879065 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:26:26.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.883801 systemd[1]: Starting dracut-pre-udev.service... Oct 2 19:26:26.892866 kernel: audit: type=1130 audit(1696274786.881:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:26.977599 kernel: raid6: neonx8 gen() 6417 MB/s Oct 2 19:26:26.995584 kernel: raid6: neonx8 xor() 4673 MB/s Oct 2 19:26:27.013586 kernel: raid6: neonx4 gen() 6492 MB/s Oct 2 19:26:27.031584 kernel: raid6: neonx4 xor() 4858 MB/s Oct 2 19:26:27.049584 kernel: raid6: neonx2 gen() 5789 MB/s Oct 2 19:26:27.067589 kernel: raid6: neonx2 xor() 4471 MB/s Oct 2 19:26:27.085585 kernel: raid6: neonx1 gen() 4475 MB/s Oct 2 19:26:27.103585 kernel: raid6: neonx1 xor() 3632 MB/s Oct 2 19:26:27.121585 kernel: raid6: int64x8 gen() 3410 MB/s Oct 2 19:26:27.139584 kernel: raid6: int64x8 xor() 2077 MB/s Oct 2 19:26:27.157587 kernel: raid6: int64x4 gen() 3788 MB/s Oct 2 19:26:27.175584 kernel: raid6: int64x4 xor() 2181 MB/s Oct 2 19:26:27.193585 kernel: raid6: int64x2 gen() 3596 MB/s Oct 2 19:26:27.211585 kernel: raid6: int64x2 xor() 1939 MB/s Oct 2 19:26:27.229586 kernel: raid6: int64x1 gen() 2775 MB/s Oct 2 19:26:27.249186 kernel: raid6: int64x1 xor() 1448 MB/s Oct 2 19:26:27.249218 kernel: raid6: using algorithm neonx4 gen() 6492 MB/s Oct 2 19:26:27.249250 kernel: raid6: .... 
xor() 4858 MB/s, rmw enabled Oct 2 19:26:27.251039 kernel: raid6: using neon recovery algorithm Oct 2 19:26:27.269591 kernel: xor: measuring software checksum speed Oct 2 19:26:27.272584 kernel: 8regs : 9349 MB/sec Oct 2 19:26:27.274584 kernel: 32regs : 11105 MB/sec Oct 2 19:26:27.279068 kernel: arm64_neon : 9624 MB/sec Oct 2 19:26:27.279105 kernel: xor: using function: 32regs (11105 MB/sec) Oct 2 19:26:27.369603 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:26:27.407119 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:26:27.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.416000 audit: BPF prog-id=7 op=LOAD Oct 2 19:26:27.420634 kernel: audit: type=1130 audit(1696274787.407:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.420686 kernel: audit: type=1334 audit(1696274787.416:10): prog-id=7 op=LOAD Oct 2 19:26:27.418804 systemd[1]: Starting systemd-udevd.service... Oct 2 19:26:27.416000 audit: BPF prog-id=8 op=LOAD Oct 2 19:26:27.455740 systemd-udevd[509]: Using default interface naming scheme 'v252'. Oct 2 19:26:27.465784 systemd[1]: Started systemd-udevd.service. Oct 2 19:26:27.472213 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:26:27.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.538064 dracut-pre-trigger[523]: rd.md=0: removing MD RAID activation Oct 2 19:26:27.647916 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:26:27.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.651139 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:26:27.764169 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:26:27.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:27.909567 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:26:27.909636 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 2 19:26:27.918850 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 2 19:26:27.918912 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 2 19:26:27.930596 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 2 19:26:27.930875 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 2 19:26:27.936911 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:26:27.936975 kernel: GPT:9289727 != 16777215 Oct 2 19:26:27.936998 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 19:26:27.939232 kernel: GPT:9289727 != 16777215 Oct 2 19:26:27.943830 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 2 19:26:27.944139 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 2 19:26:27.947344 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:26:27.958595 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d2:3d:71:51:b9 Oct 2 19:26:27.961074 (udev-worker)[570]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:26:28.034580 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (567) Oct 2 19:26:28.144571 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:26:28.202914 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:26:28.251759 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:26:28.275202 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:26:28.280202 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:26:28.297439 systemd[1]: Starting disk-uuid.service... Oct 2 19:26:28.319147 disk-uuid[674]: Primary Header is updated. Oct 2 19:26:28.319147 disk-uuid[674]: Secondary Entries is updated. Oct 2 19:26:28.319147 disk-uuid[674]: Secondary Header is updated. Oct 2 19:26:28.329286 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:26:29.355602 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:26:29.355927 disk-uuid[675]: The operation has completed successfully. Oct 2 19:26:29.624235 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:26:29.626053 systemd[1]: Finished disk-uuid.service. Oct 2 19:26:29.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:29.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:29.640921 systemd[1]: Starting verity-setup.service... Oct 2 19:26:29.691583 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 19:26:29.778844 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:26:29.785006 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:26:29.798342 systemd[1]: Finished verity-setup.service. Oct 2 19:26:29.811037 kernel: kauditd_printk_skb: 6 callbacks suppressed Oct 2 19:26:29.811105 kernel: audit: type=1130 audit(1696274789.798:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:29.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:29.890599 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:26:29.891623 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:26:29.892093 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:26:29.894478 systemd[1]: Starting ignition-setup.service... Oct 2 19:26:29.920214 systemd[1]: Starting parse-ip-for-networkd.service... 
Oct 2 19:26:29.952281 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:26:29.952346 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:26:29.952370 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:26:29.969610 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:26:30.001236 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:26:30.048509 systemd[1]: Finished ignition-setup.service. Oct 2 19:26:30.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.061418 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:26:30.072101 kernel: audit: type=1130 audit(1696274790.050:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.276774 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:26:30.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.286000 audit: BPF prog-id=9 op=LOAD Oct 2 19:26:30.290835 kernel: audit: type=1130 audit(1696274790.277:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.290898 kernel: audit: type=1334 audit(1696274790.286:20): prog-id=9 op=LOAD Oct 2 19:26:30.289073 systemd[1]: Starting systemd-networkd.service... Oct 2 19:26:30.346889 systemd-networkd[1020]: lo: Link UP Oct 2 19:26:30.346911 systemd-networkd[1020]: lo: Gained carrier Oct 2 19:26:30.348691 systemd-networkd[1020]: Enumeration completed Oct 2 19:26:30.350322 systemd[1]: Started systemd-networkd.service. Oct 2 19:26:30.350839 systemd-networkd[1020]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:26:30.359624 systemd[1]: Reached target network.target. Oct 2 19:26:30.370382 kernel: audit: type=1130 audit(1696274790.357:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.370425 systemd[1]: Starting iscsiuio.service... Oct 2 19:26:30.374048 systemd-networkd[1020]: eth0: Link UP Oct 2 19:26:30.374069 systemd-networkd[1020]: eth0: Gained carrier Oct 2 19:26:30.391713 systemd[1]: Started iscsiuio.service. Oct 2 19:26:30.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.404944 systemd-networkd[1020]: eth0: DHCPv4 address 172.31.27.68/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:26:30.408622 systemd[1]: Starting iscsid.service... Oct 2 19:26:30.415605 kernel: audit: type=1130 audit(1696274790.395:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 19:26:30.423980 iscsid[1025]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:26:30.423980 iscsid[1025]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 19:26:30.423980 iscsid[1025]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:26:30.423980 iscsid[1025]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:26:30.423980 iscsid[1025]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:26:30.445164 iscsid[1025]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:26:30.457150 systemd[1]: Started iscsid.service. Oct 2 19:26:30.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.461444 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:26:30.471839 kernel: audit: type=1130 audit(1696274790.457:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.509409 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:26:30.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.512935 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:26:30.525789 kernel: audit: type=1130 audit(1696274790.511:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.522197 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:26:30.524144 systemd[1]: Reached target remote-fs.target. Oct 2 19:26:30.531411 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:26:30.564832 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:26:30.575743 kernel: audit: type=1130 audit(1696274790.563:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:30.740713 ignition[942]: Ignition 2.14.0 Oct 2 19:26:30.741201 ignition[942]: Stage: fetch-offline Oct 2 19:26:30.741639 ignition[942]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:30.741699 ignition[942]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:30.761219 ignition[942]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:30.763978 ignition[942]: Ignition finished successfully Oct 2 19:26:30.767073 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:26:30.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.771498 systemd[1]: Starting ignition-fetch.service... Oct 2 19:26:30.780676 kernel: audit: type=1130 audit(1696274790.767:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.801586 ignition[1044]: Ignition 2.14.0 Oct 2 19:26:30.801613 ignition[1044]: Stage: fetch Oct 2 19:26:30.801955 ignition[1044]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:30.802020 ignition[1044]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:30.818058 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:30.820470 ignition[1044]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:30.839911 ignition[1044]: INFO : PUT result: OK Oct 2 19:26:30.843505 ignition[1044]: DEBUG : parsed url from cmdline: "" Oct 2 19:26:30.843505 ignition[1044]: INFO : no config URL provided Oct 2 19:26:30.843505 ignition[1044]: INFO : reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:26:30.849448 ignition[1044]: INFO : no config at "/usr/lib/ignition/user.ign" Oct 2 19:26:30.849448 ignition[1044]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:30.855252 ignition[1044]: INFO : PUT result: OK Oct 2 19:26:30.856933 ignition[1044]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 2 19:26:30.859978 ignition[1044]: INFO : GET result: OK Oct 2 19:26:30.883583 ignition[1044]: DEBUG : parsing config with SHA512: 11babff0b1db4a1c64fbc0d4501608ab6bdc3a74376c4d8e48d3f9aa808baf8118310a8b312323cd7205075c1f35162dd25437fb91eb23f87d1dbe52328b47b6 Oct 2 19:26:30.895380 unknown[1044]: fetched base config from "system" Oct 2 19:26:30.895415 unknown[1044]: fetched base config from "system" Oct 2 19:26:30.896511 ignition[1044]: fetch: fetch complete Oct 2 19:26:30.895431 unknown[1044]: fetched user config from "aws" Oct 2 19:26:30.896526 ignition[1044]: fetch: fetch passed Oct 2 19:26:30.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.904057 systemd[1]: Finished ignition-fetch.service. Oct 2 19:26:30.896658 ignition[1044]: Ignition finished successfully Oct 2 19:26:30.913365 systemd[1]: Starting ignition-kargs.service... 
Oct 2 19:26:30.946219 ignition[1050]: Ignition 2.14.0 Oct 2 19:26:30.946248 ignition[1050]: Stage: kargs Oct 2 19:26:30.946635 ignition[1050]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:30.946712 ignition[1050]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:30.961761 ignition[1050]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:30.964161 ignition[1050]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:30.967412 ignition[1050]: INFO : PUT result: OK Oct 2 19:26:30.972321 ignition[1050]: kargs: kargs passed Oct 2 19:26:30.972617 ignition[1050]: Ignition finished successfully Oct 2 19:26:30.977003 systemd[1]: Finished ignition-kargs.service. Oct 2 19:26:30.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:30.980984 systemd[1]: Starting ignition-disks.service... Oct 2 19:26:31.011037 ignition[1056]: Ignition 2.14.0 Oct 2 19:26:31.011602 ignition[1056]: Stage: disks Oct 2 19:26:31.011975 ignition[1056]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:31.012034 ignition[1056]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:31.028005 ignition[1056]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:31.030445 ignition[1056]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:31.033849 ignition[1056]: INFO : PUT result: OK Oct 2 19:26:31.039052 ignition[1056]: disks: disks passed Oct 2 19:26:31.039157 ignition[1056]: Ignition finished successfully Oct 2 19:26:31.043384 systemd[1]: Finished ignition-disks.service. Oct 2 19:26:31.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.046715 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:26:31.047054 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:26:31.047199 systemd[1]: Reached target local-fs.target. Oct 2 19:26:31.047901 systemd[1]: Reached target sysinit.target. Oct 2 19:26:31.048978 systemd[1]: Reached target basic.target. Oct 2 19:26:31.050835 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:26:31.111878 systemd-fsck[1064]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 19:26:31.120067 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:26:31.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.124735 systemd[1]: Mounting sysroot.mount... Oct 2 19:26:31.156015 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:26:31.156869 systemd[1]: Mounted sysroot.mount. Oct 2 19:26:31.160298 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:26:31.186798 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:26:31.189151 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. 
Oct 2 19:26:31.189244 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:26:31.189308 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:26:31.210357 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:26:31.237518 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:26:31.242301 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:26:31.270676 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1081) Oct 2 19:26:31.279368 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:26:31.279434 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:26:31.281627 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:26:31.284114 initrd-setup-root[1086]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:26:31.293580 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:26:31.297170 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:26:31.325789 initrd-setup-root[1112]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:26:31.343166 initrd-setup-root[1120]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:26:31.362472 initrd-setup-root[1128]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:26:31.602453 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:26:31.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.607120 systemd[1]: Starting ignition-mount.service... Oct 2 19:26:31.611838 systemd[1]: Starting sysroot-boot.service... Oct 2 19:26:31.642727 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:26:31.642897 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:26:31.687062 systemd[1]: Finished sysroot-boot.service. Oct 2 19:26:31.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.707873 ignition[1149]: INFO : Ignition 2.14.0 Oct 2 19:26:31.707873 ignition[1149]: INFO : Stage: mount Oct 2 19:26:31.711401 ignition[1149]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:31.711401 ignition[1149]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:31.729938 ignition[1149]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:31.732567 ignition[1149]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:31.736246 ignition[1149]: INFO : PUT result: OK Oct 2 19:26:31.741305 ignition[1149]: INFO : mount: mount passed Oct 2 19:26:31.743578 ignition[1149]: INFO : Ignition finished successfully Oct 2 19:26:31.744132 systemd[1]: Finished ignition-mount.service. Oct 2 19:26:31.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:31.751134 systemd[1]: Starting ignition-files.service... Oct 2 19:26:31.775004 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Oct 2 19:26:31.799070 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1156) Oct 2 19:26:31.804580 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:26:31.804613 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:26:31.804637 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:26:31.813587 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:26:31.818591 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:26:31.862715 ignition[1175]: INFO : Ignition 2.14.0 Oct 2 19:26:31.862715 ignition[1175]: INFO : Stage: files Oct 2 19:26:31.866146 ignition[1175]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:31.866146 ignition[1175]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:31.884527 ignition[1175]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:31.887148 ignition[1175]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:31.891991 ignition[1175]: INFO : PUT result: OK Oct 2 19:26:31.896481 ignition[1175]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:26:31.901061 ignition[1175]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:26:31.903968 ignition[1175]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:26:31.945959 ignition[1175]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:26:31.949174 ignition[1175]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:26:31.953284 unknown[1175]: wrote ssh authorized keys file for user: core Oct 2 19:26:31.955623 ignition[1175]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:26:31.958837 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:26:31.962864 ignition[1175]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Oct 2 19:26:32.123879 systemd-networkd[1020]: eth0: Gained IPv6LL Oct 2 19:26:32.142092 ignition[1175]: INFO : GET result: OK Oct 2 19:26:32.646316 ignition[1175]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Oct 2 19:26:32.651180 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:26:32.651180 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 19:26:32.659041 ignition[1175]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-arm64.tar.gz: attempt #1 Oct 2 19:26:32.937886 ignition[1175]: INFO : GET result: OK Oct 2 19:26:33.112923 ignition[1175]: DEBUG : file matches expected sum of: ebd055e9b2888624d006decd582db742131ed815d059d529ba21eaf864becca98a84b20a10eec91051b9d837c6855d28d5042bf5e9a454f4540aec6b82d37e96 Oct 2 19:26:33.118057 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing 
file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 19:26:33.118057 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:26:33.118057 ignition[1175]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:33.136119 ignition[1175]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3241583413" Oct 2 19:26:33.143029 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1177) Oct 2 19:26:33.143065 ignition[1175]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3241583413": device or resource busy Oct 2 19:26:33.143065 ignition[1175]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3241583413", trying btrfs: device or resource busy Oct 2 19:26:33.143065 ignition[1175]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3241583413" Oct 2 19:26:33.152827 ignition[1175]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3241583413" Oct 2 19:26:33.157181 ignition[1175]: INFO : op(3): [started] unmounting "/mnt/oem3241583413" Oct 2 19:26:33.157181 ignition[1175]: INFO : op(3): [finished] unmounting "/mnt/oem3241583413" Oct 2 19:26:33.157181 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:26:33.166249 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:26:33.166249 ignition[1175]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubeadm: attempt #1 Oct 2 19:26:33.178049 systemd[1]: mnt-oem3241583413.mount: Deactivated successfully. 
Oct 2 19:26:33.252416 ignition[1175]: INFO : GET result: OK Oct 2 19:26:33.886992 ignition[1175]: DEBUG : file matches expected sum of: daab8965a4f617d1570d04c031ab4d55fff6aa13a61f0e4045f2338947f9fb0ee3a80fdee57cfe86db885390595460342181e1ec52b89f127ef09c393ae3db7f Oct 2 19:26:33.891798 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:26:33.895253 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:26:33.898758 ignition[1175]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubelet: attempt #1 Oct 2 19:26:33.958315 ignition[1175]: INFO : GET result: OK Oct 2 19:26:35.683789 ignition[1175]: DEBUG : file matches expected sum of: 7b872a34d86e8aa75455a62a20f5cf16426de2ae54ffb8e0250fead920838df818201b8512c2f8bf4c939e5b21babab371f3a48803e2e861da9e6f8cdd022324 Oct 2 19:26:35.689043 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:26:35.689043 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:26:35.689043 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:26:35.689043 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:26:35.689043 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:26:35.689043 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:26:35.710415 ignition[1175]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:35.725343 ignition[1175]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3519998359" Oct 2 19:26:35.725343 ignition[1175]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3519998359": device or resource busy Oct 2 19:26:35.725343 ignition[1175]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3519998359", trying btrfs: device or resource busy Oct 2 19:26:35.725343 ignition[1175]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3519998359" Oct 2 19:26:35.745664 ignition[1175]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3519998359" Oct 2 19:26:35.748969 ignition[1175]: INFO : op(6): [started] unmounting "/mnt/oem3519998359" Oct 2 19:26:35.753637 systemd[1]: mnt-oem3519998359.mount: Deactivated successfully. 
Oct 2 19:26:35.756032 ignition[1175]: INFO : op(6): [finished] unmounting "/mnt/oem3519998359" Oct 2 19:26:35.758451 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:26:35.762538 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:26:35.767155 ignition[1175]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:35.778647 ignition[1175]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem382660060" Oct 2 19:26:35.781661 ignition[1175]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem382660060": device or resource busy Oct 2 19:26:35.785037 ignition[1175]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem382660060", trying btrfs: device or resource busy Oct 2 19:26:35.788706 ignition[1175]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem382660060" Oct 2 19:26:35.791604 ignition[1175]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem382660060" Oct 2 19:26:35.810739 ignition[1175]: INFO : op(9): [started] unmounting "/mnt/oem382660060" Oct 2 19:26:35.810739 ignition[1175]: INFO : op(9): [finished] unmounting "/mnt/oem382660060" Oct 2 19:26:35.810739 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:26:35.810739 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:26:35.810739 ignition[1175]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:26:35.808411 systemd[1]: mnt-oem382660060.mount: Deactivated successfully. Oct 2 19:26:35.863539 ignition[1175]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2113476063" Oct 2 19:26:35.863539 ignition[1175]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2113476063": device or resource busy Oct 2 19:26:35.863539 ignition[1175]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2113476063", trying btrfs: device or resource busy Oct 2 19:26:35.863539 ignition[1175]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2113476063" Oct 2 19:26:35.863539 ignition[1175]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2113476063" Oct 2 19:26:35.863539 ignition[1175]: INFO : op(c): [started] unmounting "/mnt/oem2113476063" Oct 2 19:26:35.863539 ignition[1175]: INFO : op(c): [finished] unmounting "/mnt/oem2113476063" Oct 2 19:26:35.934514 kernel: kauditd_printk_skb: 7 callbacks suppressed Oct 2 19:26:35.934623 kernel: audit: type=1130 audit(1696274795.906:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:35.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:35.907418 systemd[1]: Finished ignition-files.service. 
Oct 2 19:26:35.937901 ignition[1175]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(e): [started] processing unit "amazon-ssm-agent.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(e): op(f): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(e): op(f): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(e): [finished] processing unit "amazon-ssm-agent.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(10): [started] processing unit "nvidia.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(10): [finished] processing unit "nvidia.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(11): [started] processing unit "prepare-critools.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(11): op(12): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(11): op(12): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(11): [finished] processing unit "prepare-critools.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(13): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(13): op(14): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(13): op(14): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(13): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(15): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:26:35.937901 ignition[1175]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:26:36.071395 kernel: audit: type=1130 audit(1696274795.981:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.071435 kernel: audit: type=1130 audit(1696274795.993:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.071460 kernel: audit: type=1131 audit(1696274795.993:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:35.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:35.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:35.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:35.922466 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:26:36.073965 ignition[1175]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:26:36.073965 ignition[1175]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:26:36.073965 ignition[1175]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:26:36.073965 ignition[1175]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:26:36.073965 ignition[1175]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Oct 2 19:26:36.073965 ignition[1175]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Oct 2 19:26:36.073965 ignition[1175]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:26:36.073965 ignition[1175]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:26:36.073965 ignition[1175]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:26:36.073965 ignition[1175]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:26:36.073965 ignition[1175]: INFO : files: files passed Oct 2 19:26:36.073965 ignition[1175]: INFO : Ignition finished successfully Oct 2 19:26:35.924464 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:26:36.111632 initrd-setup-root-after-ignition[1200]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:26:35.934690 systemd[1]: Starting ignition-quench.service... Oct 2 19:26:35.972077 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:26:35.992060 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:26:35.992257 systemd[1]: Finished ignition-quench.service. Oct 2 19:26:35.996606 systemd[1]: Reached target ignition-complete.target. Oct 2 19:26:36.015278 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:26:36.150965 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:26:36.151830 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:26:36.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.163891 systemd[1]: Reached target initrd-fs.target. 
Oct 2 19:26:36.177086 kernel: audit: type=1130 audit(1696274796.153:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.177133 kernel: audit: type=1131 audit(1696274796.162:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.165701 systemd[1]: Reached target initrd.target. Oct 2 19:26:36.167406 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:26:36.185633 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:26:36.223692 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:26:36.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.228389 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:26:36.238223 kernel: audit: type=1130 audit(1696274796.225:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.257300 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:26:36.260953 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:26:36.264751 systemd[1]: Stopped target timers.target. Oct 2 19:26:36.267920 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:26:36.268243 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:26:36.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.276807 systemd[1]: Stopped target initrd.target. Oct 2 19:26:36.294192 kernel: audit: type=1131 audit(1696274796.271:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.283064 systemd[1]: Stopped target basic.target. Oct 2 19:26:36.284818 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:26:36.287871 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:26:36.289969 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:26:36.294265 systemd[1]: Stopped target remote-fs.target. Oct 2 19:26:36.299195 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:26:36.304665 systemd[1]: Stopped target sysinit.target. Oct 2 19:26:36.311261 systemd[1]: Stopped target local-fs.target. Oct 2 19:26:36.314572 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:26:36.317826 systemd[1]: Stopped target swap.target. Oct 2 19:26:36.320875 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:26:36.323028 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:26:36.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.326418 systemd[1]: Stopped target cryptsetup.target. 
Oct 2 19:26:36.335794 kernel: audit: type=1131 audit(1696274796.325:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.337639 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:26:36.339890 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:26:36.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.346842 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:26:36.356514 kernel: audit: type=1131 audit(1696274796.345:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.347065 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:26:36.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.361512 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:26:36.361737 systemd[1]: Stopped ignition-files.service. Oct 2 19:26:36.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.368333 systemd[1]: Stopping ignition-mount.service... Oct 2 19:26:36.370840 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:26:36.371179 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:26:36.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.375071 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:26:36.376579 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:26:36.376921 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:26:36.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.389919 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:26:36.390238 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:26:36.424481 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:26:36.426783 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:26:36.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:36.449456 ignition[1213]: INFO : Ignition 2.14.0 Oct 2 19:26:36.451434 ignition[1213]: INFO : Stage: umount Oct 2 19:26:36.453324 ignition[1213]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:26:36.456003 ignition[1213]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:26:36.471734 ignition[1213]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:26:36.474390 ignition[1213]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:26:36.478478 ignition[1213]: INFO : PUT result: OK Oct 2 19:26:36.487265 ignition[1213]: INFO : umount: umount passed Oct 2 19:26:36.489256 ignition[1213]: INFO : Ignition finished successfully Oct 2 19:26:36.492689 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:26:36.494711 systemd[1]: Stopped ignition-mount.service. Oct 2 19:26:36.498124 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:26:36.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.498230 systemd[1]: Stopped ignition-disks.service. Oct 2 19:26:36.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.515479 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:26:36.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.515580 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:26:36.531404 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:26:36.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.531500 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:26:36.536483 systemd[1]: Stopped target network.target. Oct 2 19:26:36.540583 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:26:36.540684 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:26:36.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.544517 systemd[1]: Stopped target paths.target. Oct 2 19:26:36.549298 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:26:36.551679 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:26:36.555099 systemd[1]: Stopped target slices.target. Oct 2 19:26:36.556733 systemd[1]: Stopped target sockets.target. Oct 2 19:26:36.559687 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:26:36.561069 systemd[1]: Closed iscsid.socket. Oct 2 19:26:36.565181 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:26:36.565242 systemd[1]: Closed iscsiuio.socket. Oct 2 19:26:36.568273 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:26:36.571657 systemd[1]: Stopped ignition-setup.service. 
Oct 2 19:26:36.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.575054 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:26:36.578398 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:26:36.582208 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:26:36.582422 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:26:36.584051 systemd-networkd[1020]: eth0: DHCPv6 lease lost Oct 2 19:26:36.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.593175 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:26:36.593385 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:26:36.595937 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:26:36.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.607000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:26:36.607000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:26:36.596126 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:26:36.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.600033 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:26:36.600124 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:26:36.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.603124 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:26:36.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.603208 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:26:36.607214 systemd[1]: Stopping network-cleanup.service... Oct 2 19:26:36.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.615137 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:26:36.615257 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:26:36.618801 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:26:36.618963 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:26:36.623072 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Oct 2 19:26:36.623157 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:26:36.626353 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:26:36.640060 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:26:36.640362 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:26:36.661038 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:26:36.663134 systemd[1]: Stopped network-cleanup.service. Oct 2 19:26:36.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.666759 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:26:36.666858 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:26:36.671638 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:26:36.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.671733 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:26:36.676354 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:26:36.676436 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:26:36.678535 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:26:36.678707 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:26:36.681914 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:26:36.681993 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:26:36.684992 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:26:36.686720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:26:36.686822 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:26:36.711696 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:26:36.711882 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:26:36.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:36.735381 systemd[1]: mnt-oem2113476063.mount: Deactivated successfully. Oct 2 19:26:36.735576 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:26:36.737503 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:26:36.742708 systemd[1]: Reached target initrd-switch-root.target. 
Oct 2 19:26:36.747613 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:26:36.771871 systemd[1]: Switching root. Oct 2 19:26:36.807330 iscsid[1025]: iscsid shutting down. Oct 2 19:26:36.809112 systemd-journald[308]: Received SIGTERM from PID 1 (n/a). Oct 2 19:26:36.809203 systemd-journald[308]: Journal stopped Oct 2 19:26:42.776135 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:26:42.776769 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:26:42.776813 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:26:42.776844 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:26:42.776880 kernel: SELinux: policy capability open_perms=1 Oct 2 19:26:42.776915 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:26:42.776947 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:26:42.777046 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:26:42.777082 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:26:42.777112 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:26:42.777144 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:26:42.777178 systemd[1]: Successfully loaded SELinux policy in 76.897ms. Oct 2 19:26:42.777404 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.841ms. Oct 2 19:26:42.777444 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:26:42.777476 systemd[1]: Detected virtualization amazon. Oct 2 19:26:42.777586 systemd[1]: Detected architecture arm64. Oct 2 19:26:42.777622 systemd[1]: Detected first boot. Oct 2 19:26:42.777653 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:26:42.777684 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:26:42.777715 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:26:42.777755 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:26:42.777791 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:26:42.777891 kernel: kauditd_printk_skb: 37 callbacks suppressed Oct 2 19:26:42.777927 kernel: audit: type=1334 audit(1696274802.225:81): prog-id=12 op=LOAD Oct 2 19:26:42.777957 kernel: audit: type=1334 audit(1696274802.230:82): prog-id=3 op=UNLOAD Oct 2 19:26:42.777988 kernel: audit: type=1334 audit(1696274802.230:83): prog-id=13 op=LOAD Oct 2 19:26:42.778018 kernel: audit: type=1334 audit(1696274802.230:84): prog-id=14 op=LOAD Oct 2 19:26:42.778048 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:26:42.778146 kernel: audit: type=1334 audit(1696274802.230:85): prog-id=4 op=UNLOAD Oct 2 19:26:42.778180 systemd[1]: Stopped iscsiuio.service. 
Oct 2 19:26:42.778213 kernel: audit: type=1334 audit(1696274802.230:86): prog-id=5 op=UNLOAD Oct 2 19:26:42.778243 kernel: audit: type=1334 audit(1696274802.238:87): prog-id=15 op=LOAD Oct 2 19:26:42.778271 kernel: audit: type=1334 audit(1696274802.238:88): prog-id=12 op=UNLOAD Oct 2 19:26:42.778304 kernel: audit: type=1334 audit(1696274802.238:89): prog-id=16 op=LOAD Oct 2 19:26:42.778332 kernel: audit: type=1334 audit(1696274802.238:90): prog-id=17 op=LOAD Oct 2 19:26:42.778362 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:26:42.778398 systemd[1]: Stopped iscsid.service. Oct 2 19:26:42.778428 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:26:42.778460 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:26:42.778491 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:26:42.778521 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:26:42.778571 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:26:42.778606 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:26:42.778638 systemd[1]: Created slice system-getty.slice. Oct 2 19:26:42.778676 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:26:42.778709 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:26:42.778738 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:26:42.778768 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:26:42.778800 systemd[1]: Created slice user.slice. Oct 2 19:26:42.778830 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:26:42.778860 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:26:42.778890 systemd[1]: Set up automount boot.automount. Oct 2 19:26:42.778919 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:26:42.778954 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:26:42.778986 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:26:42.779015 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:26:42.779047 systemd[1]: Reached target integritysetup.target. Oct 2 19:26:42.779077 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:26:42.779110 systemd[1]: Reached target remote-fs.target. Oct 2 19:26:42.779142 systemd[1]: Reached target slices.target. Oct 2 19:26:42.779171 systemd[1]: Reached target swap.target. Oct 2 19:26:42.779200 systemd[1]: Reached target torcx.target. Oct 2 19:26:42.779233 systemd[1]: Reached target veritysetup.target. Oct 2 19:26:42.779263 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:26:42.779292 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:26:42.779326 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:26:42.779358 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:26:42.779405 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:26:42.779438 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:26:42.779470 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:26:42.779500 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:26:42.779532 systemd[1]: Mounting media.mount... Oct 2 19:26:42.779583 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:26:42.779615 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:26:42.779645 systemd[1]: Mounting tmp.mount... Oct 2 19:26:42.779676 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:26:42.779706 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Oct 2 19:26:42.779735 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:26:42.779767 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:26:42.779796 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:26:42.779825 systemd[1]: Starting modprobe@drm.service... Oct 2 19:26:42.779858 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:26:42.779890 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:26:42.779920 systemd[1]: Starting modprobe@loop.service... Oct 2 19:26:42.779951 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:26:42.779980 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:26:42.780015 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:26:42.780047 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:26:42.780078 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:26:42.780108 systemd[1]: Stopped systemd-journald.service. Oct 2 19:26:42.780144 systemd[1]: Starting systemd-journald.service... Oct 2 19:26:42.780176 kernel: loop: module loaded Oct 2 19:26:42.780206 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:26:42.780240 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:26:42.780270 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:26:42.780303 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:26:42.780335 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:26:42.780365 systemd[1]: Stopped verity-setup.service. Oct 2 19:26:42.780397 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:26:42.780430 kernel: fuse: init (API version 7.34) Oct 2 19:26:42.780462 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:26:42.780491 systemd[1]: Mounted media.mount. Oct 2 19:26:42.780522 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:26:42.811694 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:26:42.811766 systemd[1]: Mounted tmp.mount. Oct 2 19:26:42.811800 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:26:42.811842 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:26:42.811877 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:26:42.811908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:26:42.811943 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:26:42.811973 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:26:42.812003 systemd[1]: Finished modprobe@drm.service. Oct 2 19:26:42.812033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:26:42.812069 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:26:42.812099 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:26:42.812129 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:26:42.812159 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:26:42.812188 systemd[1]: Finished modprobe@loop.service. Oct 2 19:26:42.812218 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:26:42.812252 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:26:42.812282 systemd[1]: Reached target network-pre.target. Oct 2 19:26:42.812312 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:26:42.812342 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:26:42.812374 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Oct 2 19:26:42.812405 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:26:42.812435 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:26:42.812471 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:26:42.812503 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:26:42.812534 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:26:42.812595 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:26:42.812628 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:26:42.812661 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:26:42.812703 systemd-journald[1320]: Journal started Oct 2 19:26:42.812816 systemd-journald[1320]: Runtime Journal (/run/log/journal/ec288522476c28ff868ce35939698cff) is 8.0M, max 75.4M, 67.4M free. Oct 2 19:26:37.652000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:26:37.809000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:26:37.809000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:26:37.809000 audit: BPF prog-id=10 op=LOAD Oct 2 19:26:37.809000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:26:37.809000 audit: BPF prog-id=11 op=LOAD Oct 2 19:26:37.809000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:26:42.225000 audit: BPF prog-id=12 op=LOAD Oct 2 19:26:42.230000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:26:42.230000 audit: BPF prog-id=13 op=LOAD Oct 2 19:26:42.230000 audit: BPF prog-id=14 op=LOAD Oct 2 19:26:42.230000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:26:42.230000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:26:42.238000 audit: BPF prog-id=15 op=LOAD Oct 2 19:26:42.238000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:26:42.238000 audit: BPF prog-id=16 op=LOAD Oct 2 19:26:42.238000 audit: BPF prog-id=17 op=LOAD Oct 2 19:26:42.238000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:26:42.238000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:26:42.241000 audit: BPF prog-id=18 op=LOAD Oct 2 19:26:42.241000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:26:42.241000 audit: BPF prog-id=19 op=LOAD Oct 2 19:26:42.241000 audit: BPF prog-id=20 op=LOAD Oct 2 19:26:42.241000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:26:42.241000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:26:42.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.251000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:26:42.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:42.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.823667 systemd[1]: Started systemd-journald.service. Oct 2 19:26:42.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.565000 audit: BPF prog-id=21 op=LOAD Oct 2 19:26:42.566000 audit: BPF prog-id=22 op=LOAD Oct 2 19:26:42.566000 audit: BPF prog-id=23 op=LOAD Oct 2 19:26:42.566000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:26:42.566000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:26:42.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:42.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.764000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:26:42.764000 audit[1320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe69b5aa0 a2=4000 a3=1 items=0 ppid=1 pid=1320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:42.764000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:26:42.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.223730 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:26:38.026375 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:26:42.244083 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 19:26:38.038740 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:26:42.824476 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:26:38.038806 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:26:38.038882 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:26:38.038914 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:26:38.038995 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:26:38.039028 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:26:38.039574 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:26:38.039679 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:26:38.039716 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:26:38.040743 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:26:38.040831 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:26:38.040883 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:26:38.040924 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:26:38.040972 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:26:38.041011 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:26:41.346479 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:41Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 
19:26:41.347060 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:41Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:41.347306 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:41Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:41.347772 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:41Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:26:41.347877 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:41Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:26:41.348010 /usr/lib/systemd/system-generators/torcx-generator[1246]: time="2023-10-02T19:26:41Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:26:42.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.871890 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:26:42.874687 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:26:42.887322 systemd-journald[1320]: Time spent on flushing to /var/log/journal/ec288522476c28ff868ce35939698cff is 65.454ms for 1145 entries. Oct 2 19:26:42.887322 systemd-journald[1320]: System Journal (/var/log/journal/ec288522476c28ff868ce35939698cff) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:26:42.971962 systemd-journald[1320]: Received client request to flush runtime journal. Oct 2 19:26:42.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:42.918957 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:26:42.974412 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:26:42.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:43.022720 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:26:43.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:43.026876 systemd[1]: Starting systemd-sysusers.service... 
Oct 2 19:26:43.029458 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:26:43.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:43.036207 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:26:43.066900 udevadm[1363]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:26:43.202304 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:26:43.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:43.812422 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:26:43.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:43.814000 audit: BPF prog-id=24 op=LOAD Oct 2 19:26:43.814000 audit: BPF prog-id=25 op=LOAD Oct 2 19:26:43.814000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:26:43.814000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:26:43.817027 systemd[1]: Starting systemd-udevd.service... Oct 2 19:26:43.864982 systemd-udevd[1365]: Using default interface naming scheme 'v252'. Oct 2 19:26:43.929152 systemd[1]: Started systemd-udevd.service. Oct 2 19:26:43.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:43.931000 audit: BPF prog-id=26 op=LOAD Oct 2 19:26:43.933992 systemd[1]: Starting systemd-networkd.service... Oct 2 19:26:43.954000 audit: BPF prog-id=27 op=LOAD Oct 2 19:26:43.954000 audit: BPF prog-id=28 op=LOAD Oct 2 19:26:43.955000 audit: BPF prog-id=29 op=LOAD Oct 2 19:26:43.957874 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:26:44.070475 systemd[1]: Started systemd-userdbd.service. Oct 2 19:26:44.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.078215 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:26:44.138845 (udev-worker)[1382]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:26:44.303927 systemd-networkd[1367]: lo: Link UP Oct 2 19:26:44.304407 systemd-networkd[1367]: lo: Gained carrier Oct 2 19:26:44.305492 systemd-networkd[1367]: Enumeration completed Oct 2 19:26:44.305935 systemd[1]: Started systemd-networkd.service. Oct 2 19:26:44.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.308469 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:26:44.310204 systemd[1]: Starting systemd-networkd-wait-online.service... 
Oct 2 19:26:44.326436 systemd-networkd[1367]: eth0: Link UP Oct 2 19:26:44.326670 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:26:44.327098 systemd-networkd[1367]: eth0: Gained carrier Oct 2 19:26:44.346848 systemd-networkd[1367]: eth0: DHCPv4 address 172.31.27.68/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:26:44.460643 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1372) Oct 2 19:26:44.665291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:26:44.668305 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:26:44.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.672647 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:26:44.735703 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:26:44.774609 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:26:44.776848 systemd[1]: Reached target cryptsetup.target. Oct 2 19:26:44.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.781070 systemd[1]: Starting lvm2-activation.service... Oct 2 19:26:44.796056 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:26:44.834706 systemd[1]: Finished lvm2-activation.service. Oct 2 19:26:44.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:44.836822 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:26:44.838698 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:26:44.838752 systemd[1]: Reached target local-fs.target. Oct 2 19:26:44.840519 systemd[1]: Reached target machines.target. Oct 2 19:26:44.844932 systemd[1]: Starting ldconfig.service... Oct 2 19:26:44.847170 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:26:44.847316 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:44.849721 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:26:44.853595 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:26:44.859192 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:26:44.861319 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:26:44.861439 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:26:44.863892 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:26:44.909736 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1487 (bootctl) Oct 2 19:26:44.912753 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:26:44.938483 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Oct 2 19:26:44.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.003380 systemd-tmpfiles[1490]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:26:45.022496 systemd-tmpfiles[1490]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:26:45.046873 systemd-tmpfiles[1490]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:26:45.055596 systemd-fsck[1495]: fsck.fat 4.2 (2021-01-31) Oct 2 19:26:45.055596 systemd-fsck[1495]: /dev/nvme0n1p1: 236 files, 113463/258078 clusters Oct 2 19:26:45.063848 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:26:45.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.069582 systemd[1]: Mounting boot.mount... Oct 2 19:26:45.109936 systemd[1]: Mounted boot.mount. Oct 2 19:26:45.143861 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:26:45.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.396700 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:26:45.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.401272 systemd[1]: Starting audit-rules.service... Oct 2 19:26:45.405222 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:26:45.410246 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:26:45.412000 audit: BPF prog-id=30 op=LOAD Oct 2 19:26:45.420000 audit: BPF prog-id=31 op=LOAD Oct 2 19:26:45.417177 systemd[1]: Starting systemd-resolved.service... Oct 2 19:26:45.424206 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:26:45.428251 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:26:45.512711 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:26:45.514959 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:26:45.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.553000 audit[1515]: SYSTEM_BOOT pid=1515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.559284 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:26:45.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:26:45.593010 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:26:45.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:45.595084 systemd[1]: Reached target time-set.target. Oct 2 19:26:45.667721 systemd-resolved[1513]: Positive Trust Anchors: Oct 2 19:26:45.667743 systemd-resolved[1513]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:26:45.667797 systemd-resolved[1513]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:26:45.769881 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:26:45.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:46.334190 systemd-timesyncd[1514]: Contacted time server 204.93.207.12:123 (0.flatcar.pool.ntp.org). Oct 2 19:26:46.334308 systemd-timesyncd[1514]: Initial clock synchronization to Mon 2023-10-02 19:26:46.333997 UTC. Oct 2 19:26:46.410425 systemd-resolved[1513]: Defaulting to hostname 'linux'. Oct 2 19:26:46.415011 systemd[1]: Started systemd-resolved.service. Oct 2 19:26:46.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:26:46.417115 systemd[1]: Reached target network.target. Oct 2 19:26:46.418849 systemd[1]: Reached target nss-lookup.target. Oct 2 19:26:46.424000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:26:46.424000 audit[1531]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd751e4c0 a2=420 a3=0 items=0 ppid=1510 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:26:46.424000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:26:46.425493 augenrules[1531]: No rules Oct 2 19:26:46.427386 systemd[1]: Finished audit-rules.service. Oct 2 19:26:46.465680 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:26:46.466749 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:26:46.774581 ldconfig[1486]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:26:46.780246 systemd[1]: Finished ldconfig.service. Oct 2 19:26:46.784565 systemd[1]: Starting systemd-update-done.service... Oct 2 19:26:46.806639 systemd[1]: Finished systemd-update-done.service. Oct 2 19:26:46.808970 systemd[1]: Reached target sysinit.target. Oct 2 19:26:46.811032 systemd[1]: Started motdgen.path. 
Oct 2 19:26:46.812827 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:26:46.815685 systemd[1]: Started logrotate.timer. Oct 2 19:26:46.817781 systemd[1]: Started mdadm.timer. Oct 2 19:26:46.819676 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:26:46.821650 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:26:46.821870 systemd[1]: Reached target paths.target. Oct 2 19:26:46.823641 systemd[1]: Reached target timers.target. Oct 2 19:26:46.826415 systemd[1]: Listening on dbus.socket. Oct 2 19:26:46.830236 systemd[1]: Starting docker.socket... Oct 2 19:26:46.836902 systemd-networkd[1367]: eth0: Gained IPv6LL Oct 2 19:26:46.840668 systemd[1]: Listening on sshd.socket. Oct 2 19:26:46.842533 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:46.843864 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:26:46.846429 systemd[1]: Listening on docker.socket. Oct 2 19:26:46.848304 systemd[1]: Reached target network-online.target. Oct 2 19:26:46.850271 systemd[1]: Reached target sockets.target. Oct 2 19:26:46.852053 systemd[1]: Reached target basic.target. Oct 2 19:26:46.853919 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:26:46.853988 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:26:46.856191 systemd[1]: Started amazon-ssm-agent.service. Oct 2 19:26:46.863858 systemd[1]: Starting containerd.service... Oct 2 19:26:46.867919 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 19:26:46.877524 systemd[1]: Starting dbus.service... Oct 2 19:26:46.885942 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:26:46.892827 systemd[1]: Starting extend-filesystems.service... Oct 2 19:26:46.894661 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:26:46.897093 systemd[1]: Starting motdgen.service... Oct 2 19:26:46.902126 systemd[1]: Started nvidia.service. Oct 2 19:26:46.908950 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:26:46.912912 systemd[1]: Starting prepare-critools.service... Oct 2 19:26:46.919399 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:26:46.923371 systemd[1]: Starting sshd-keygen.service... Oct 2 19:26:46.937672 systemd[1]: Starting systemd-logind.service... Oct 2 19:26:46.939418 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:26:46.939556 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:26:46.940437 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:26:46.942022 systemd[1]: Starting update-engine.service... Oct 2 19:26:46.948862 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Oct 2 19:26:47.032533 jq[1554]: true Oct 2 19:26:47.036349 tar[1556]: ./ Oct 2 19:26:47.036349 tar[1556]: ./macvlan Oct 2 19:26:47.060083 jq[1544]: false Oct 2 19:26:47.062258 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:26:47.062664 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:26:47.077519 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:26:47.077905 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:26:47.148932 tar[1557]: crictl Oct 2 19:26:47.170179 jq[1568]: true Oct 2 19:26:47.200351 dbus-daemon[1543]: [system] SELinux support is enabled Oct 2 19:26:47.204837 systemd[1]: Started dbus.service. Oct 2 19:26:47.210041 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:26:47.210106 systemd[1]: Reached target system-config.target. Oct 2 19:26:47.212312 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:26:47.212360 systemd[1]: Reached target user-config.target. Oct 2 19:26:47.222697 dbus-daemon[1543]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1367 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 19:26:47.238518 dbus-daemon[1543]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 19:26:47.250169 systemd[1]: Starting systemd-hostnamed.service... Oct 2 19:26:47.279911 extend-filesystems[1545]: Found nvme0n1 Oct 2 19:26:47.283297 extend-filesystems[1545]: Found nvme0n1p1 Oct 2 19:26:47.286381 extend-filesystems[1545]: Found nvme0n1p2 Oct 2 19:26:47.288301 extend-filesystems[1545]: Found nvme0n1p3 Oct 2 19:26:47.289958 extend-filesystems[1545]: Found usr Oct 2 19:26:47.293393 extend-filesystems[1545]: Found nvme0n1p4 Oct 2 19:26:47.295113 extend-filesystems[1545]: Found nvme0n1p6 Oct 2 19:26:47.295113 extend-filesystems[1545]: Found nvme0n1p7 Oct 2 19:26:47.295113 extend-filesystems[1545]: Found nvme0n1p9 Oct 2 19:26:47.301164 extend-filesystems[1545]: Checking size of /dev/nvme0n1p9 Oct 2 19:26:47.330023 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:26:47.330407 systemd[1]: Finished motdgen.service. Oct 2 19:26:47.377641 update_engine[1553]: I1002 19:26:47.377125 1553 main.cc:92] Flatcar Update Engine starting Oct 2 19:26:47.412351 systemd[1]: Started update-engine.service. Oct 2 19:26:47.417135 systemd[1]: Started locksmithd.service. Oct 2 19:26:47.419677 update_engine[1553]: I1002 19:26:47.419610 1553 update_check_scheduler.cc:74] Next update check in 11m4s Oct 2 19:26:47.457717 amazon-ssm-agent[1540]: 2023/10/02 19:26:47 Failed to load instance info from vault. RegistrationKey does not exist. Oct 2 19:26:47.459364 extend-filesystems[1545]: Resized partition /dev/nvme0n1p9 Oct 2 19:26:47.480525 amazon-ssm-agent[1540]: Initializing new seelog logger Oct 2 19:26:47.480829 amazon-ssm-agent[1540]: New Seelog Logger Creation Complete Oct 2 19:26:47.481006 amazon-ssm-agent[1540]: 2023/10/02 19:26:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:26:47.481006 amazon-ssm-agent[1540]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Oct 2 19:26:47.481497 amazon-ssm-agent[1540]: 2023/10/02 19:26:47 processing appconfig overrides Oct 2 19:26:47.491885 extend-filesystems[1603]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 19:26:47.530767 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 2 19:26:47.586739 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 2 19:26:47.617551 tar[1556]: ./static Oct 2 19:26:47.618003 extend-filesystems[1603]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 2 19:26:47.618003 extend-filesystems[1603]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 2 19:26:47.618003 extend-filesystems[1603]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 2 19:26:47.625863 extend-filesystems[1545]: Resized filesystem in /dev/nvme0n1p9 Oct 2 19:26:47.637235 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:26:47.637624 systemd[1]: Finished extend-filesystems.service. Oct 2 19:26:47.646470 bash[1619]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:26:47.648246 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:26:47.651176 env[1559]: time="2023-10-02T19:26:47.651086408Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:26:47.662228 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:26:47.665534 systemd-logind[1552]: New seat seat0. Oct 2 19:26:47.680983 systemd[1]: Started systemd-logind.service. Oct 2 19:26:47.752458 systemd[1]: nvidia.service: Deactivated successfully. Oct 2 19:26:47.791803 tar[1556]: ./vlan Oct 2 19:26:47.847385 dbus-daemon[1543]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 19:26:47.847694 systemd[1]: Started systemd-hostnamed.service. Oct 2 19:26:47.857855 dbus-daemon[1543]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1585 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 19:26:47.863072 systemd[1]: Starting polkit.service... Oct 2 19:26:47.906427 env[1559]: time="2023-10-02T19:26:47.906242818Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:26:47.906588 env[1559]: time="2023-10-02T19:26:47.906556246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:47.944427 env[1559]: time="2023-10-02T19:26:47.944313994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:47.944591 env[1559]: time="2023-10-02T19:26:47.944437558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:47.945111 env[1559]: time="2023-10-02T19:26:47.945022702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:47.945243 env[1559]: time="2023-10-02T19:26:47.945104134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:26:47.945243 env[1559]: time="2023-10-02T19:26:47.945163810Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:26:47.945243 env[1559]: time="2023-10-02T19:26:47.945190702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:47.945540 env[1559]: time="2023-10-02T19:26:47.945495142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:47.946522 env[1559]: time="2023-10-02T19:26:47.946456942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:26:47.947130 env[1559]: time="2023-10-02T19:26:47.947068678Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:26:47.947320 env[1559]: time="2023-10-02T19:26:47.947143642Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:26:47.947383 env[1559]: time="2023-10-02T19:26:47.947343658Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:26:47.947451 env[1559]: time="2023-10-02T19:26:47.947372518Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:26:47.958727 env[1559]: time="2023-10-02T19:26:47.958638478Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:26:47.958911 env[1559]: time="2023-10-02T19:26:47.958737058Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:26:47.958911 env[1559]: time="2023-10-02T19:26:47.958775254Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:26:47.958911 env[1559]: time="2023-10-02T19:26:47.958859770Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:26:47.959365 env[1559]: time="2023-10-02T19:26:47.959268142Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:26:47.959455 env[1559]: time="2023-10-02T19:26:47.959372506Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:26:47.959455 env[1559]: time="2023-10-02T19:26:47.959408758Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:26:47.960034 env[1559]: time="2023-10-02T19:26:47.959973838Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:26:47.960222 env[1559]: time="2023-10-02T19:26:47.960037678Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:26:47.960222 env[1559]: time="2023-10-02T19:26:47.960074038Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:26:47.960222 env[1559]: time="2023-10-02T19:26:47.960105478Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Oct 2 19:26:47.960222 env[1559]: time="2023-10-02T19:26:47.960135346Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:26:47.960436 env[1559]: time="2023-10-02T19:26:47.960384754Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:26:47.960631 env[1559]: time="2023-10-02T19:26:47.960584194Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:26:47.964467 polkitd[1629]: Started polkitd version 121 Oct 2 19:26:47.967057 env[1559]: time="2023-10-02T19:26:47.966958630Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:26:47.967188 env[1559]: time="2023-10-02T19:26:47.967078498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.967188 env[1559]: time="2023-10-02T19:26:47.967114546Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:26:47.967319 env[1559]: time="2023-10-02T19:26:47.967289398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.967641 env[1559]: time="2023-10-02T19:26:47.967594558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.967736 env[1559]: time="2023-10-02T19:26:47.967648330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.967736 env[1559]: time="2023-10-02T19:26:47.967680526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.967844 env[1559]: time="2023-10-02T19:26:47.967731178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.967844 env[1559]: time="2023-10-02T19:26:47.967765474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.967844 env[1559]: time="2023-10-02T19:26:47.967794838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.967844 env[1559]: time="2023-10-02T19:26:47.967824658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.968073 env[1559]: time="2023-10-02T19:26:47.967859134Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:26:47.968194 env[1559]: time="2023-10-02T19:26:47.968147614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.968272 env[1559]: time="2023-10-02T19:26:47.968198014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.968272 env[1559]: time="2023-10-02T19:26:47.968232262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.968272 env[1559]: time="2023-10-02T19:26:47.968263462Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:26:47.968420 env[1559]: time="2023-10-02T19:26:47.968296486Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:26:47.968420 env[1559]: time="2023-10-02T19:26:47.968327818Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:26:47.968420 env[1559]: time="2023-10-02T19:26:47.968368642Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:26:47.968595 env[1559]: time="2023-10-02T19:26:47.968446990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:26:47.975291 env[1559]: time="2023-10-02T19:26:47.975129706Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:26:47.975291 env[1559]: time="2023-10-02T19:26:47.975275554Z" level=info msg="Connect containerd service" Oct 2 19:26:47.976841 env[1559]: time="2023-10-02T19:26:47.975489250Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:26:48.004260 polkitd[1629]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 19:26:48.005723 env[1559]: time="2023-10-02T19:26:48.005501190Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:26:48.006215 env[1559]: time="2023-10-02T19:26:48.006127914Z" level=info msg="Start subscribing containerd event" Oct 2 19:26:48.006311 
env[1559]: time="2023-10-02T19:26:48.006233922Z" level=info msg="Start recovering state" Oct 2 19:26:48.006371 env[1559]: time="2023-10-02T19:26:48.006351294Z" level=info msg="Start event monitor" Oct 2 19:26:48.006537 env[1559]: time="2023-10-02T19:26:48.006496434Z" level=info msg="Start snapshots syncer" Oct 2 19:26:48.006606 env[1559]: time="2023-10-02T19:26:48.006532566Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:26:48.006606 env[1559]: time="2023-10-02T19:26:48.006556518Z" level=info msg="Start streaming server" Oct 2 19:26:48.007600 env[1559]: time="2023-10-02T19:26:48.007514634Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:26:48.008202 polkitd[1629]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 19:26:48.011226 env[1559]: time="2023-10-02T19:26:48.011145486Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:26:48.011523 polkitd[1629]: Finished loading, compiling and executing 2 rules Oct 2 19:26:48.012402 dbus-daemon[1543]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 19:26:48.012717 systemd[1]: Started polkit.service. Oct 2 19:26:48.018797 polkitd[1629]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 19:26:48.043960 systemd[1]: Started containerd.service. Oct 2 19:26:48.052594 systemd-hostnamed[1585]: Hostname set to (transient) Oct 2 19:26:48.052833 systemd-resolved[1513]: System hostname changed to 'ip-172-31-27-68'. Oct 2 19:26:48.057696 tar[1556]: ./portmap Oct 2 19:26:48.085500 env[1559]: time="2023-10-02T19:26:48.085399830Z" level=info msg="containerd successfully booted in 0.457390s" Oct 2 19:26:48.133137 coreos-metadata[1542]: Oct 02 19:26:48.132 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:26:48.138813 coreos-metadata[1542]: Oct 02 19:26:48.138 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Oct 2 19:26:48.139860 coreos-metadata[1542]: Oct 02 19:26:48.139 INFO Fetch successful Oct 2 19:26:48.140160 coreos-metadata[1542]: Oct 02 19:26:48.139 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 19:26:48.141363 coreos-metadata[1542]: Oct 02 19:26:48.141 INFO Fetch successful Oct 2 19:26:48.144606 unknown[1542]: wrote ssh authorized keys file for user: core Oct 2 19:26:48.186331 tar[1556]: ./host-local Oct 2 19:26:48.193435 update-ssh-keys[1654]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:26:48.194545 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 19:26:48.286117 tar[1556]: ./vrf Oct 2 19:26:48.396538 amazon-ssm-agent[1540]: 2023-10-02 19:26:48 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-05dd446d39e817e56 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-05dd446d39e817e56 because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 19:26:48.396538 amazon-ssm-agent[1540]: status code: 400, request id: 561bcb8f-9cef-4f85-9115-0a012bff61be Oct 2 19:26:48.396538 amazon-ssm-agent[1540]: 2023-10-02 19:26:48 INFO Agent is in hibernate mode. Reducing logging. 
Logging will be reduced to one log per backoff period Oct 2 19:26:48.406402 tar[1556]: ./bridge Oct 2 19:26:48.536521 tar[1556]: ./tuning Oct 2 19:26:48.646098 tar[1556]: ./firewall Oct 2 19:26:48.777391 tar[1556]: ./host-device Oct 2 19:26:48.829685 systemd[1]: Finished prepare-critools.service. Oct 2 19:26:48.874471 tar[1556]: ./sbr Oct 2 19:26:48.977061 tar[1556]: ./loopback Oct 2 19:26:49.036525 tar[1556]: ./dhcp Oct 2 19:26:49.170975 tar[1556]: ./ptp Oct 2 19:26:49.222672 tar[1556]: ./ipvlan Oct 2 19:26:49.270643 tar[1556]: ./bandwidth Oct 2 19:26:49.344773 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:26:49.473526 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:26:55.055601 systemd[1]: Created slice system-sshd.slice. Oct 2 19:26:57.515068 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:26:57.573272 systemd[1]: Finished sshd-keygen.service. Oct 2 19:26:57.578049 systemd[1]: Starting issuegen.service... Oct 2 19:26:57.582113 systemd[1]: Started sshd@0-172.31.27.68:22-139.178.89.65:38862.service. Oct 2 19:26:57.602306 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:26:57.602660 systemd[1]: Finished issuegen.service. Oct 2 19:26:57.607349 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:26:57.636786 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:26:57.641639 systemd[1]: Started getty@tty1.service. Oct 2 19:26:57.646410 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:26:57.648851 systemd[1]: Reached target getty.target. Oct 2 19:26:57.650695 systemd[1]: Reached target multi-user.target. Oct 2 19:26:57.655202 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:26:57.685388 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:26:57.685782 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:26:57.688141 systemd[1]: Startup finished in 1.188s (kernel) + 11.919s (initrd) + 19.726s (userspace) = 32.834s. Oct 2 19:26:57.786902 sshd[1745]: Accepted publickey for core from 139.178.89.65 port 38862 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:57.791569 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:57.809699 systemd[1]: Created slice user-500.slice. Oct 2 19:26:57.813108 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:26:57.819799 systemd-logind[1552]: New session 1 of user core. Oct 2 19:26:57.838271 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:26:57.842680 systemd[1]: Starting user@500.service... Oct 2 19:26:57.854840 (systemd)[1755]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:58.056127 systemd[1755]: Queued start job for default target default.target. Oct 2 19:26:58.058086 systemd[1755]: Reached target paths.target. Oct 2 19:26:58.058142 systemd[1755]: Reached target sockets.target. Oct 2 19:26:58.058175 systemd[1755]: Reached target timers.target. Oct 2 19:26:58.058206 systemd[1755]: Reached target basic.target. Oct 2 19:26:58.058300 systemd[1755]: Reached target default.target. Oct 2 19:26:58.058367 systemd[1755]: Startup finished in 185ms. Oct 2 19:26:58.058830 systemd[1]: Started user@500.service. Oct 2 19:26:58.060800 systemd[1]: Started session-1.scope. Oct 2 19:26:58.222974 systemd[1]: Started sshd@1-172.31.27.68:22-139.178.89.65:45114.service. 
Oct 2 19:26:58.409374 sshd[1764]: Accepted publickey for core from 139.178.89.65 port 45114 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:58.412752 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:58.421855 systemd-logind[1552]: New session 2 of user core. Oct 2 19:26:58.422087 systemd[1]: Started session-2.scope. Oct 2 19:26:58.572599 sshd[1764]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:58.578862 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:26:58.579910 systemd[1]: sshd@1-172.31.27.68:22-139.178.89.65:45114.service: Deactivated successfully. Oct 2 19:26:58.581221 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:26:58.582651 systemd-logind[1552]: Removed session 2. Oct 2 19:26:58.604815 systemd[1]: Started sshd@2-172.31.27.68:22-139.178.89.65:45118.service. Oct 2 19:26:58.792508 sshd[1770]: Accepted publickey for core from 139.178.89.65 port 45118 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:58.795719 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:58.804559 systemd[1]: Started session-3.scope. Oct 2 19:26:58.805523 systemd-logind[1552]: New session 3 of user core. Oct 2 19:26:58.938356 sshd[1770]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:58.944039 systemd-logind[1552]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:26:58.944634 systemd[1]: sshd@2-172.31.27.68:22-139.178.89.65:45118.service: Deactivated successfully. Oct 2 19:26:58.945866 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:26:58.947389 systemd-logind[1552]: Removed session 3. Oct 2 19:26:58.968755 systemd[1]: Started sshd@3-172.31.27.68:22-139.178.89.65:45130.service. Oct 2 19:26:59.152340 sshd[1776]: Accepted publickey for core from 139.178.89.65 port 45130 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:59.154905 sshd[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:59.163001 systemd-logind[1552]: New session 4 of user core. Oct 2 19:26:59.164010 systemd[1]: Started session-4.scope. Oct 2 19:26:59.311128 sshd[1776]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:59.317065 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:26:59.318200 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:26:59.318558 systemd[1]: sshd@3-172.31.27.68:22-139.178.89.65:45130.service: Deactivated successfully. Oct 2 19:26:59.321063 systemd-logind[1552]: Removed session 4. Oct 2 19:26:59.342669 systemd[1]: Started sshd@4-172.31.27.68:22-139.178.89.65:45132.service. Oct 2 19:26:59.524280 sshd[1782]: Accepted publickey for core from 139.178.89.65 port 45132 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:59.527497 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:59.535680 systemd-logind[1552]: New session 5 of user core. Oct 2 19:26:59.536619 systemd[1]: Started session-5.scope. 
Oct 2 19:26:59.665918 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:26:59.666452 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:26:59.680983 dbus-daemon[1543]: avc: received setenforce notice (enforcing=1) Oct 2 19:26:59.683651 sudo[1785]: pam_unix(sudo:session): session closed for user root Oct 2 19:26:59.708180 sshd[1782]: pam_unix(sshd:session): session closed for user core Oct 2 19:26:59.714092 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:26:59.714800 systemd[1]: sshd@4-172.31.27.68:22-139.178.89.65:45132.service: Deactivated successfully. Oct 2 19:26:59.716068 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:26:59.717688 systemd-logind[1552]: Removed session 5. Oct 2 19:26:59.738277 systemd[1]: Started sshd@5-172.31.27.68:22-139.178.89.65:45148.service. Oct 2 19:26:59.918965 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 45148 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:26:59.922374 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:26:59.931809 systemd[1]: Started session-6.scope. Oct 2 19:26:59.932546 systemd-logind[1552]: New session 6 of user core. Oct 2 19:27:00.052486 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:27:00.053155 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:27:00.060321 sudo[1793]: pam_unix(sudo:session): session closed for user root Oct 2 19:27:00.073930 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:27:00.074925 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:27:00.099823 systemd[1]: Stopping audit-rules.service... Oct 2 19:27:00.102000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:27:00.106420 kernel: kauditd_printk_skb: 80 callbacks suppressed Oct 2 19:27:00.106481 kernel: audit: type=1305 audit(1696274820.102:167): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:27:00.102000 audit[1796]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdf463990 a2=420 a3=0 items=0 ppid=1 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:00.122783 kernel: audit: type=1300 audit(1696274820.102:167): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdf463990 a2=420 a3=0 items=0 ppid=1 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:00.122868 kernel: audit: type=1327 audit(1696274820.102:167): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:27:00.102000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:27:00.126351 auditctl[1796]: No rules Oct 2 19:27:00.127300 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:27:00.127645 systemd[1]: Stopped audit-rules.service. 
Oct 2 19:27:00.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.135859 systemd[1]: Starting audit-rules.service... Oct 2 19:27:00.137188 kernel: audit: type=1131 audit(1696274820.126:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.194725 augenrules[1813]: No rules Oct 2 19:27:00.197004 systemd[1]: Finished audit-rules.service. Oct 2 19:27:00.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.206000 audit[1792]: USER_END pid=1792 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.207411 sudo[1792]: pam_unix(sudo:session): session closed for user root Oct 2 19:27:00.216663 kernel: audit: type=1130 audit(1696274820.195:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.216778 kernel: audit: type=1106 audit(1696274820.206:170): pid=1792 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.216852 kernel: audit: type=1104 audit(1696274820.206:171): pid=1792 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.206000 audit[1792]: CRED_DISP pid=1792 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.231300 sshd[1789]: pam_unix(sshd:session): session closed for user core Oct 2 19:27:00.231000 audit[1789]: USER_END pid=1789 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.246524 systemd[1]: sshd@5-172.31.27.68:22-139.178.89.65:45148.service: Deactivated successfully. 
Oct 2 19:27:00.231000 audit[1789]: CRED_DISP pid=1789 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.255934 kernel: audit: type=1106 audit(1696274820.231:172): pid=1789 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.256039 kernel: audit: type=1104 audit(1696274820.231:173): pid=1789 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.247699 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:27:00.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.27.68:22-139.178.89.65:45148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.265743 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:27:00.267759 kernel: audit: type=1131 audit(1696274820.245:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.27.68:22-139.178.89.65:45148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.269517 systemd[1]: Started sshd@6-172.31.27.68:22-139.178.89.65:45154.service. Oct 2 19:27:00.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.27.68:22-139.178.89.65:45154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.273489 systemd-logind[1552]: Removed session 6. Oct 2 19:27:00.449000 audit[1819]: USER_ACCT pid=1819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.452574 sshd[1819]: Accepted publickey for core from 139.178.89.65 port 45154 ssh2: RSA SHA256:UWiPcUSyDphe9v2WN1dtuuOFHMYWuZ3ahwMZ2IbYxYo Oct 2 19:27:00.452000 audit[1819]: CRED_ACQ pid=1819 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.452000 audit[1819]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd3fdd370 a2=3 a3=1 items=0 ppid=1 pid=1819 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:00.452000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:27:00.455319 sshd[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:27:00.463294 systemd-logind[1552]: New session 7 of user core. Oct 2 19:27:00.464195 systemd[1]: Started session-7.scope. 
Oct 2 19:27:00.471000 audit[1819]: USER_START pid=1819 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.478000 audit[1821]: CRED_ACQ pid=1821 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:00.582000 audit[1822]: USER_ACCT pid=1822 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.584622 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:27:00.583000 audit[1822]: CRED_REFR pid=1822 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:00.585734 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:27:00.587000 audit[1822]: USER_START pid=1822 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:01.263311 systemd[1]: Reloading. Oct 2 19:27:01.468204 /usr/lib/systemd/system-generators/torcx-generator[1855]: time="2023-10-02T19:27:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:27:01.468805 /usr/lib/systemd/system-generators/torcx-generator[1855]: time="2023-10-02T19:27:01Z" level=info msg="torcx already run" Oct 2 19:27:01.692981 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:27:01.693240 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:27:01.731583 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:27:01.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.894000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.894000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.895000 audit: BPF prog-id=40 op=LOAD Oct 2 19:27:01.895000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit: BPF prog-id=41 op=LOAD Oct 2 19:27:01.900000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit: BPF prog-id=42 op=LOAD Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.900000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit: BPF prog-id=43 op=LOAD Oct 2 19:27:01.901000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:27:01.901000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:27:01.901000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.901000 audit: BPF prog-id=44 op=LOAD Oct 2 19:27:01.901000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:27:01.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.906000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.906000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit: BPF prog-id=45 op=LOAD Oct 2 19:27:01.907000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit: BPF prog-id=46 op=LOAD Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.907000 audit: BPF prog-id=47 op=LOAD Oct 2 19:27:01.907000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:27:01.907000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:27:01.910000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.910000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.910000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.910000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.910000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.910000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.910000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.910000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.910000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit: BPF prog-id=48 op=LOAD Oct 2 19:27:01.911000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit: BPF prog-id=49 op=LOAD Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.911000 audit: BPF prog-id=50 op=LOAD Oct 2 19:27:01.911000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:27:01.911000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:27:01.912000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.912000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.912000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.912000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.912000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.912000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.912000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.912000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.912000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.912000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:27:01.912000 audit: BPF prog-id=51 op=LOAD Oct 2 19:27:01.912000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit: BPF prog-id=52 op=LOAD Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.913000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.914000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.914000 audit: BPF prog-id=53 op=LOAD Oct 2 19:27:01.914000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:27:01.914000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:27:01.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.916000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.916000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.917000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.917000 audit: BPF prog-id=54 op=LOAD Oct 2 19:27:01.917000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:27:01.917000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.917000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.917000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.917000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.917000 audit[1]: AVC avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.917000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.917000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.917000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.917000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit: BPF prog-id=55 op=LOAD Oct 2 19:27:01.918000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit: BPF prog-id=56 op=LOAD Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:01.918000 audit: BPF prog-id=57 op=LOAD Oct 2 19:27:01.918000 audit: BPF prog-id=36 op=UNLOAD Oct 2 19:27:01.918000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:27:01.957575 systemd[1]: Started kubelet.service. Oct 2 19:27:01.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:01.993559 systemd[1]: Starting coreos-metadata.service... Oct 2 19:27:02.130721 kubelet[1906]: E1002 19:27:02.130602 1906 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 19:27:02.134664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:27:02.135011 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:27:02.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Oct 2 19:27:02.189288 coreos-metadata[1914]: Oct 02 19:27:02.189 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:27:02.190350 coreos-metadata[1914]: Oct 02 19:27:02.190 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Oct 2 19:27:02.192547 coreos-metadata[1914]: Oct 02 19:27:02.192 INFO Fetch successful Oct 2 19:27:02.192751 coreos-metadata[1914]: Oct 02 19:27:02.192 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Oct 2 19:27:02.193211 coreos-metadata[1914]: Oct 02 19:27:02.193 INFO Fetch successful Oct 2 19:27:02.193290 coreos-metadata[1914]: Oct 02 19:27:02.193 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Oct 2 19:27:02.193846 coreos-metadata[1914]: Oct 02 19:27:02.193 INFO Fetch successful Oct 2 19:27:02.193929 coreos-metadata[1914]: Oct 02 19:27:02.193 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Oct 2 19:27:02.194425 coreos-metadata[1914]: Oct 02 19:27:02.194 INFO Fetch successful Oct 2 19:27:02.194538 coreos-metadata[1914]: Oct 02 19:27:02.194 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Oct 2 19:27:02.194991 coreos-metadata[1914]: Oct 02 19:27:02.194 INFO Fetch successful Oct 2 19:27:02.195064 coreos-metadata[1914]: Oct 02 19:27:02.195 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Oct 2 19:27:02.195565 coreos-metadata[1914]: Oct 02 19:27:02.195 INFO Fetch successful Oct 2 19:27:02.195641 coreos-metadata[1914]: Oct 02 19:27:02.195 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Oct 2 19:27:02.196121 coreos-metadata[1914]: Oct 02 19:27:02.196 INFO Fetch successful Oct 2 19:27:02.196196 coreos-metadata[1914]: Oct 02 19:27:02.196 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Oct 2 19:27:02.196966 coreos-metadata[1914]: Oct 02 19:27:02.196 INFO Fetch successful Oct 2 19:27:02.219985 systemd[1]: Finished coreos-metadata.service. Oct 2 19:27:02.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:02.770436 systemd[1]: Stopped kubelet.service. Oct 2 19:27:02.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:02.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:02.813446 systemd[1]: Reloading. 
Oct 2 19:27:02.992895 /usr/lib/systemd/system-generators/torcx-generator[1970]: time="2023-10-02T19:27:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:27:02.992958 /usr/lib/systemd/system-generators/torcx-generator[1970]: time="2023-10-02T19:27:02Z" level=info msg="torcx already run" Oct 2 19:27:03.251445 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:27:03.251486 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:27:03.289464 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:27:03.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.447000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.447000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.448000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.448000 audit: BPF prog-id=58 op=LOAD Oct 2 19:27:03.448000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:27:03.453000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.453000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.453000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.453000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.453000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.453000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.453000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.453000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.453000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.453000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit: BPF prog-id=59 op=LOAD Oct 2 19:27:03.454000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit: BPF prog-id=60 op=LOAD Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.454000 audit: BPF prog-id=61 op=LOAD Oct 2 19:27:03.454000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:27:03.454000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:27:03.455000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.455000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.455000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.455000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.455000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.455000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:27:03.455000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.455000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.455000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.455000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.455000 audit: BPF prog-id=62 op=LOAD Oct 2 19:27:03.455000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:27:03.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.459000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.459000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.460000 audit: BPF prog-id=63 op=LOAD Oct 2 19:27:03.460000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:27:03.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.460000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.460000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.460000 audit: BPF prog-id=64 op=LOAD Oct 2 19:27:03.460000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.461000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.461000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.461000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.461000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.461000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.461000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.461000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.461000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.461000 audit: BPF prog-id=65 op=LOAD Oct 2 19:27:03.461000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:27:03.461000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { bpf } for 
pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit: BPF prog-id=66 op=LOAD Oct 2 19:27:03.464000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:27:03.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit: BPF prog-id=67 op=LOAD Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit: BPF prog-id=68 op=LOAD Oct 2 19:27:03.465000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:27:03.465000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.466000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.466000 audit: BPF prog-id=69 op=LOAD Oct 2 19:27:03.466000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit: BPF prog-id=70 op=LOAD Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.467000 audit: BPF prog-id=71 op=LOAD Oct 2 19:27:03.467000 audit: BPF prog-id=52 op=UNLOAD Oct 2 19:27:03.467000 audit: BPF prog-id=53 op=UNLOAD Oct 2 19:27:03.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.470000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.470000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.470000 audit: BPF prog-id=72 op=LOAD Oct 2 19:27:03.470000 audit: BPF prog-id=54 op=UNLOAD Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit: BPF prog-id=73 op=LOAD Oct 2 19:27:03.471000 audit: BPF prog-id=55 op=UNLOAD Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit: BPF prog-id=74 op=LOAD Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:03.472000 audit: BPF prog-id=75 op=LOAD Oct 2 19:27:03.472000 audit: BPF prog-id=56 op=UNLOAD Oct 2 19:27:03.472000 audit: BPF prog-id=57 op=UNLOAD Oct 2 19:27:03.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:03.522638 systemd[1]: Started kubelet.service. Oct 2 19:27:03.665623 kubelet[2026]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:27:03.666334 kubelet[2026]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Oct 2 19:27:03.666437 kubelet[2026]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:27:03.666791 kubelet[2026]: I1002 19:27:03.666737 2026 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:27:03.672675 kubelet[2026]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:27:03.672866 kubelet[2026]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:27:03.672963 kubelet[2026]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:27:04.594831 kubelet[2026]: I1002 19:27:04.594772 2026 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:27:04.594831 kubelet[2026]: I1002 19:27:04.594818 2026 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:27:04.595195 kubelet[2026]: I1002 19:27:04.595154 2026 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:27:04.607164 kubelet[2026]: I1002 19:27:04.607108 2026 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:27:04.609316 kubelet[2026]: W1002 19:27:04.609287 2026 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:27:04.610562 kubelet[2026]: I1002 19:27:04.610536 2026 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:27:04.611178 kubelet[2026]: I1002 19:27:04.611157 2026 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:27:04.611408 kubelet[2026]: I1002 19:27:04.611385 2026 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:27:04.611729 kubelet[2026]: I1002 19:27:04.611692 2026 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:27:04.611848 kubelet[2026]: I1002 19:27:04.611829 2026 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:27:04.612099 kubelet[2026]: I1002 19:27:04.612079 2026 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:27:04.621177 kubelet[2026]: I1002 19:27:04.621122 2026 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:27:04.621177 kubelet[2026]: I1002 19:27:04.621167 2026 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:27:04.621447 kubelet[2026]: I1002 19:27:04.621207 2026 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:27:04.621447 kubelet[2026]: I1002 19:27:04.621233 2026 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:27:04.622758 kubelet[2026]: E1002 19:27:04.622663 2026 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:04.622943 kubelet[2026]: E1002 19:27:04.622856 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:04.624555 kubelet[2026]: I1002 19:27:04.624497 2026 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:27:04.625510 kubelet[2026]: W1002 19:27:04.625465 2026 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:27:04.626790 kubelet[2026]: I1002 19:27:04.626659 2026 server.go:1175] "Started kubelet" Oct 2 19:27:04.627946 kubelet[2026]: I1002 19:27:04.627905 2026 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:27:04.627000 audit[2026]: AVC avc: denied { mac_admin } for pid=2026 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:04.627000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:27:04.627000 audit[2026]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40000495c0 a1=4000af6ee8 a2=4000049590 a3=25 items=0 ppid=1 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.627000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:27:04.628000 audit[2026]: AVC avc: denied { mac_admin } for pid=2026 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:04.628000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:27:04.628000 audit[2026]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40004245c0 a1=4000af6f00 a2=4000049650 a3=25 items=0 ppid=1 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.628000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:27:04.630613 kubelet[2026]: I1002 19:27:04.629841 2026 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:27:04.630613 kubelet[2026]: I1002 19:27:04.629957 2026 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:27:04.630613 kubelet[2026]: I1002 19:27:04.630288 2026 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:27:04.631413 kubelet[2026]: I1002 19:27:04.631378 2026 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:27:04.637423 kubelet[2026]: I1002 19:27:04.637382 2026 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:27:04.639600 kubelet[2026]: I1002 19:27:04.637621 2026 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:27:04.639787 kubelet[2026]: E1002 19:27:04.639037 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:04.644268 kubelet[2026]: E1002 
19:27:04.644209 2026 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.31.27.68" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:04.645634 kubelet[2026]: E1002 19:27:04.644584 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73b4eb715", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 626616085, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 626616085, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.645634 kubelet[2026]: W1002 19:27:04.645131 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:04.645634 kubelet[2026]: E1002 19:27:04.645197 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:04.646070 kubelet[2026]: W1002 19:27:04.645299 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.27.68" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:04.646070 kubelet[2026]: E1002 19:27:04.645353 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.27.68" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:04.646070 kubelet[2026]: W1002 19:27:04.645443 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:04.646070 kubelet[2026]: E1002 19:27:04.645470 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:04.646070 kubelet[2026]: E1002 
19:27:04.645742 2026 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:27:04.646070 kubelet[2026]: E1002 19:27:04.645807 2026 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:27:04.661959 kubelet[2026]: E1002 19:27:04.661810 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73c7336c1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 645785281, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 645785281, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.706606 kubelet[2026]: E1002 19:27:04.706483 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff577cd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.68 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704653261, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704653261, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:04.709846 kubelet[2026]: E1002 19:27:04.708690 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5bc49", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.68 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704670793, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704670793, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.710388 kubelet[2026]: I1002 19:27:04.710342 2026 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:27:04.710388 kubelet[2026]: I1002 19:27:04.710380 2026 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:27:04.710544 kubelet[2026]: I1002 19:27:04.710414 2026 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:27:04.710791 kubelet[2026]: E1002 19:27:04.710648 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5ea45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.68 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704682565, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704682565, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:04.713337 kubelet[2026]: I1002 19:27:04.713284 2026 policy_none.go:49] "None policy: Start" Oct 2 19:27:04.715954 kubelet[2026]: I1002 19:27:04.715918 2026 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:27:04.716132 kubelet[2026]: I1002 19:27:04.716111 2026 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:27:04.715000 audit[2044]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.715000 audit[2044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe2c1ddb0 a2=0 a3=1 items=0 ppid=2026 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:27:04.721000 audit[2046]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.721000 audit[2046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffff8faaa80 a2=0 a3=1 items=0 ppid=2026 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.721000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:27:04.725354 systemd[1]: Created slice kubepods.slice. Oct 2 19:27:04.739087 kubelet[2026]: E1002 19:27:04.739028 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:04.739843 kubelet[2026]: I1002 19:27:04.739799 2026 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.68" Oct 2 19:27:04.742270 kubelet[2026]: E1002 19:27:04.742230 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.68" Oct 2 19:27:04.742664 kubelet[2026]: E1002 19:27:04.742230 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff577cd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.68 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704653261, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 739743205, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff577cd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.746558 kubelet[2026]: E1002 19:27:04.744876 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5bc49", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.68 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704670793, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 739753345, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5bc49" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.747792 kubelet[2026]: E1002 19:27:04.747649 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5ea45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.68 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704682565, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 739760353, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5ea45" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.749859 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:27:04.756225 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:27:04.764101 kubelet[2026]: I1002 19:27:04.764067 2026 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:27:04.762000 audit[2026]: AVC avc: denied { mac_admin } for pid=2026 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:04.762000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:27:04.762000 audit[2026]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c1c5d0 a1=4000af6c00 a2=4000c1c5a0 a3=25 items=0 ppid=1 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.762000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:27:04.764928 kubelet[2026]: I1002 19:27:04.764900 2026 server.go:86] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:27:04.766179 kubelet[2026]: I1002 19:27:04.766149 2026 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:27:04.769086 kubelet[2026]: E1002 19:27:04.769048 2026 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.27.68\" not found" Oct 2 19:27:04.730000 audit[2048]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.730000 audit[2048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdd228df0 a2=0 a3=1 items=0 ppid=2026 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.730000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:27:04.774097 kubelet[2026]: E1002 19:27:04.773917 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f743d1f4fd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 769434877, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 769434877, time.Local), Count:1, 
Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.779000 audit[2053]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.779000 audit[2053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffff4def80 a2=0 a3=1 items=0 ppid=2026 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.779000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:27:04.839255 kubelet[2026]: E1002 19:27:04.839214 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:04.846104 kubelet[2026]: E1002 19:27:04.845958 2026 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.31.27.68" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:04.844000 audit[2058]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.844000 audit[2058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc7a5fa90 a2=0 a3=1 items=0 ppid=2026 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:27:04.851000 audit[2059]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.851000 audit[2059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff163d710 a2=0 a3=1 items=0 ppid=2026 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:27:04.867000 audit[2062]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.867000 audit[2062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff6691b80 a2=0 a3=1 items=0 ppid=2026 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.867000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:27:04.881000 audit[2065]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.881000 audit[2065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffdb828030 a2=0 a3=1 items=0 ppid=2026 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:27:04.885000 audit[2066]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.885000 audit[2066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd4683550 a2=0 a3=1 items=0 ppid=2026 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.885000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:27:04.890000 audit[2067]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.890000 audit[2067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc88c1130 a2=0 a3=1 items=0 ppid=2026 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:27:04.897000 audit[2069]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.897000 audit[2069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd7562b80 a2=0 a3=1 items=0 ppid=2026 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.897000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:27:04.905000 audit[2071]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.905000 audit[2071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd7ad5120 a2=0 a3=1 items=0 ppid=2026 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.905000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:27:04.939899 kubelet[2026]: E1002 19:27:04.939820 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:04.939000 audit[2074]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.939000 audit[2074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffe30ecab0 a2=0 a3=1 items=0 ppid=2026 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:27:04.944047 kubelet[2026]: I1002 19:27:04.943988 2026 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.68" Oct 2 19:27:04.946820 kubelet[2026]: E1002 19:27:04.946760 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.68" Oct 2 19:27:04.947138 kubelet[2026]: E1002 19:27:04.947000 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff577cd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.68 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704653261, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 943934906, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff577cd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:04.949002 kubelet[2026]: E1002 19:27:04.948804 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5bc49", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.68 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704670793, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 943943186, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5bc49" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:04.952000 audit[2078]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.952000 audit[2078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffffb6f5760 a2=0 a3=1 items=0 ppid=2026 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.952000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:27:04.973000 audit[2081]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=2081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.973000 audit[2081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffffe003830 a2=0 a3=1 items=0 ppid=2026 pid=2081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.973000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:27:04.975857 kubelet[2026]: I1002 19:27:04.975811 2026 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:27:04.978000 audit[2083]: NETFILTER_CFG table=mangle:17 family=2 entries=1 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.978000 audit[2083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf2bdad0 a2=0 a3=1 items=0 ppid=2026 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.978000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:27:04.979000 audit[2082]: NETFILTER_CFG table=mangle:18 family=10 entries=2 op=nft_register_chain pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.979000 audit[2082]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe147bbb0 a2=0 a3=1 items=0 ppid=2026 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.979000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:27:04.982000 audit[2084]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_chain pid=2084 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.982000 audit[2084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea1eac40 a2=0 a3=1 items=0 ppid=2026 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:27:04.986000 audit[2085]: NETFILTER_CFG table=nat:20 family=10 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.986000 audit[2085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffb3fcd90 a2=0 a3=1 items=0 ppid=2026 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:27:04.987000 audit[2086]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:04.987000 audit[2086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7d175c0 a2=0 a3=1 items=0 ppid=2026 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.987000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:27:04.997000 audit[2088]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=2088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:04.997000 audit[2088]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcfc509b0 a2=0 a3=1 items=0 ppid=2026 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:04.997000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:27:05.001000 audit[2089]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.001000 audit[2089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffc41725b0 a2=0 a3=1 items=0 ppid=2026 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.001000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:27:05.010000 audit[2091]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.010000 audit[2091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffdcea0850 a2=0 a3=1 items=0 ppid=2026 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:27:05.015000 audit[2092]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.015000 audit[2092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd99f0100 a2=0 a3=1 items=0 ppid=2026 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.015000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:27:05.020000 audit[2093]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.020000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce6ad740 a2=0 a3=1 items=0 ppid=2026 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.020000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:27:05.029000 audit[2095]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.029000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 
a0=3 a1=ffffcb2c6f60 a2=0 a3=1 items=0 ppid=2026 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.029000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:27:05.034101 kubelet[2026]: E1002 19:27:05.033945 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5ea45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.68 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704682565, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 943948142, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5ea45" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:05.038000 audit[2097]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.040518 kubelet[2026]: E1002 19:27:05.040475 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:05.038000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff0804030 a2=0 a3=1 items=0 ppid=2026 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.038000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:27:05.047000 audit[2099]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.047000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffff2741c00 a2=0 a3=1 items=0 ppid=2026 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:27:05.055000 audit[2101]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.055000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffc8ce1950 a2=0 a3=1 items=0 ppid=2026 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.055000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:27:05.065000 audit[2103]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.065000 audit[2103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffc7cb6680 a2=0 a3=1 items=0 ppid=2026 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:27:05.067963 kubelet[2026]: I1002 19:27:05.067927 2026 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 19:27:05.068087 kubelet[2026]: I1002 19:27:05.068053 2026 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:27:05.068178 kubelet[2026]: I1002 19:27:05.068088 2026 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:27:05.068178 kubelet[2026]: E1002 19:27:05.068160 2026 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:27:05.071166 kubelet[2026]: W1002 19:27:05.071126 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:05.071371 kubelet[2026]: E1002 19:27:05.071348 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:05.071000 audit[2104]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=2104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.071000 audit[2104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4ab3f50 a2=0 a3=1 items=0 ppid=2026 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.071000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:27:05.075000 audit[2105]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.075000 audit[2105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf614770 a2=0 a3=1 items=0 ppid=2026 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.075000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:27:05.079000 audit[2106]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:05.079000 audit[2106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc21461a0 a2=0 a3=1 items=0 ppid=2026 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:05.079000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:27:05.140858 kubelet[2026]: E1002 19:27:05.140691 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:05.241555 kubelet[2026]: E1002 19:27:05.241513 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:05.248479 kubelet[2026]: E1002 19:27:05.248429 2026 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: 
leases.coordination.k8s.io "172.31.27.68" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:05.341994 kubelet[2026]: E1002 19:27:05.341929 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:05.348694 kubelet[2026]: I1002 19:27:05.348644 2026 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.68" Oct 2 19:27:05.352003 kubelet[2026]: E1002 19:27:05.351962 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.68" Oct 2 19:27:05.352191 kubelet[2026]: E1002 19:27:05.352064 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff577cd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.68 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704653261, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 5, 348599136, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff577cd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:05.434476 kubelet[2026]: E1002 19:27:05.434334 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5bc49", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.68 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704670793, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 5, 348606960, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5bc49" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:05.443234 kubelet[2026]: E1002 19:27:05.443172 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:05.544007 kubelet[2026]: E1002 19:27:05.543961 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:05.623650 kubelet[2026]: E1002 19:27:05.623596 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:05.628246 kubelet[2026]: W1002 19:27:05.628197 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.27.68" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:05.628246 kubelet[2026]: E1002 19:27:05.628246 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.27.68" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:05.633805 kubelet[2026]: E1002 19:27:05.633650 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5ea45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.68 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704682565, 
time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 5, 348611832, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5ea45" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:05.644337 kubelet[2026]: E1002 19:27:05.644291 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:05.744811 kubelet[2026]: E1002 19:27:05.744643 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:05.845594 kubelet[2026]: E1002 19:27:05.845512 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:05.887250 kubelet[2026]: W1002 19:27:05.887214 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:05.887445 kubelet[2026]: E1002 19:27:05.887424 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:05.945673 kubelet[2026]: E1002 19:27:05.945633 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:05.979788 kubelet[2026]: W1002 19:27:05.979754 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:05.979968 kubelet[2026]: E1002 19:27:05.979948 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:06.046974 kubelet[2026]: E1002 19:27:06.046267 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:06.053383 kubelet[2026]: E1002 19:27:06.053334 2026 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.31.27.68" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:06.147919 kubelet[2026]: E1002 19:27:06.147861 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:06.152896 kubelet[2026]: I1002 19:27:06.152846 2026 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.68" Oct 2 19:27:06.154722 kubelet[2026]: E1002 19:27:06.154649 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.68" Oct 2 19:27:06.155175 kubelet[2026]: E1002 19:27:06.155049 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff577cd", GenerateName:"", Namespace:"default", 
SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.68 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704653261, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 6, 152801484, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff577cd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:06.156838 kubelet[2026]: E1002 19:27:06.156658 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5bc49", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.68 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704670793, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 6, 152809488, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5bc49" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:06.234677 kubelet[2026]: E1002 19:27:06.234557 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5ea45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.68 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704682565, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 6, 152815464, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5ea45" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:06.248006 kubelet[2026]: E1002 19:27:06.247954 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:06.349510 kubelet[2026]: E1002 19:27:06.348455 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:06.449363 kubelet[2026]: E1002 19:27:06.449327 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:06.503137 kubelet[2026]: W1002 19:27:06.503096 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:06.503137 kubelet[2026]: E1002 19:27:06.503143 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:06.550565 kubelet[2026]: E1002 19:27:06.550529 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:06.624616 kubelet[2026]: E1002 19:27:06.623947 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:06.651159 kubelet[2026]: E1002 19:27:06.651108 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:06.751793 kubelet[2026]: E1002 19:27:06.751740 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:06.852647 kubelet[2026]: E1002 19:27:06.852590 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:06.953203 kubelet[2026]: E1002 19:27:06.953167 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:07.053781 kubelet[2026]: E1002 19:27:07.053732 2026 kubelet.go:2448] "Error getting 
node" err="node \"172.31.27.68\" not found" Oct 2 19:27:07.154216 kubelet[2026]: E1002 19:27:07.154160 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:07.255174 kubelet[2026]: E1002 19:27:07.254700 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:07.356429 kubelet[2026]: E1002 19:27:07.356385 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:07.457155 kubelet[2026]: E1002 19:27:07.457112 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:07.558248 kubelet[2026]: E1002 19:27:07.557805 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:07.624379 kubelet[2026]: E1002 19:27:07.624325 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:07.655857 kubelet[2026]: E1002 19:27:07.655778 2026 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.31.27.68" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:07.659044 kubelet[2026]: E1002 19:27:07.659014 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:07.756465 kubelet[2026]: I1002 19:27:07.756414 2026 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.68" Oct 2 19:27:07.758138 kubelet[2026]: E1002 19:27:07.758009 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff577cd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.68 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704653261, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 7, 756355840, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff577cd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:07.758520 kubelet[2026]: E1002 19:27:07.758487 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.68" Oct 2 19:27:07.759521 kubelet[2026]: E1002 19:27:07.759398 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5bc49", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.68 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704670793, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 7, 756376720, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5bc49" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:07.759982 kubelet[2026]: E1002 19:27:07.759911 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:07.760928 kubelet[2026]: E1002 19:27:07.760812 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5ea45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.68 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704682565, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 7, 756382264, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5ea45" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:07.861778 kubelet[2026]: E1002 19:27:07.860774 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:07.961661 kubelet[2026]: E1002 19:27:07.961617 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:07.983378 kubelet[2026]: W1002 19:27:07.983341 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:07.983609 kubelet[2026]: E1002 19:27:07.983587 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:08.061998 kubelet[2026]: E1002 19:27:08.061952 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:08.162476 kubelet[2026]: E1002 19:27:08.162431 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:08.191402 kubelet[2026]: W1002 19:27:08.191343 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:08.191778 kubelet[2026]: E1002 19:27:08.191662 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:08.263236 kubelet[2026]: E1002 19:27:08.263167 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:08.364069 kubelet[2026]: E1002 19:27:08.364012 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:08.464984 kubelet[2026]: E1002 19:27:08.464842 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:08.565621 kubelet[2026]: E1002 19:27:08.565571 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:08.625174 kubelet[2026]: E1002 19:27:08.625128 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:08.666565 kubelet[2026]: E1002 19:27:08.666496 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:08.697597 kubelet[2026]: W1002 19:27:08.697556 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:08.697882 kubelet[2026]: E1002 19:27:08.697859 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:08.703089 kubelet[2026]: W1002 19:27:08.703033 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.31.27.68" is forbidden: User 
"system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:08.703089 kubelet[2026]: E1002 19:27:08.703086 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.27.68" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:08.766760 kubelet[2026]: E1002 19:27:08.766589 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:08.867670 kubelet[2026]: E1002 19:27:08.867607 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:08.968475 kubelet[2026]: E1002 19:27:08.968420 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:09.069363 kubelet[2026]: E1002 19:27:09.069215 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:09.169600 kubelet[2026]: E1002 19:27:09.169541 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:09.270385 kubelet[2026]: E1002 19:27:09.270330 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:09.371302 kubelet[2026]: E1002 19:27:09.371161 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:09.472089 kubelet[2026]: E1002 19:27:09.472027 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:09.572909 kubelet[2026]: E1002 19:27:09.572848 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:09.625525 kubelet[2026]: E1002 19:27:09.625390 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:09.673746 kubelet[2026]: E1002 19:27:09.673676 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:09.766938 kubelet[2026]: E1002 19:27:09.766886 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:09.773993 kubelet[2026]: E1002 19:27:09.773949 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:09.874803 kubelet[2026]: E1002 19:27:09.874745 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:09.975623 kubelet[2026]: E1002 19:27:09.975561 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:10.076592 kubelet[2026]: E1002 19:27:10.076530 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:10.177155 kubelet[2026]: E1002 19:27:10.177091 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:10.277948 kubelet[2026]: E1002 19:27:10.277809 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:10.378752 kubelet[2026]: E1002 19:27:10.378676 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:10.479574 kubelet[2026]: E1002 19:27:10.479519 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:10.580477 kubelet[2026]: E1002 19:27:10.580327 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:10.625817 kubelet[2026]: E1002 19:27:10.625765 2026 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:10.681343 kubelet[2026]: E1002 19:27:10.681284 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:10.782240 kubelet[2026]: E1002 19:27:10.782186 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:10.858414 kubelet[2026]: E1002 19:27:10.858271 2026 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.31.27.68" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:27:10.882842 kubelet[2026]: E1002 19:27:10.882782 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:10.960908 kubelet[2026]: I1002 19:27:10.960444 2026 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.68" Oct 2 19:27:10.962263 kubelet[2026]: E1002 19:27:10.962149 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff577cd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.27.68 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704653261, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 10, 960380132, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff577cd" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:10.962629 kubelet[2026]: E1002 19:27:10.962591 2026 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.27.68" Oct 2 19:27:10.963789 kubelet[2026]: E1002 19:27:10.963407 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5bc49", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.27.68 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704670793, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 10, 960407528, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5bc49" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:27:10.964765 kubelet[2026]: E1002 19:27:10.964631 2026 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.27.68.178a60f73ff5ea45", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.27.68", UID:"172.31.27.68", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.27.68 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.27.68"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 27, 4, 704682565, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 27, 10, 960413012, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.31.27.68.178a60f73ff5ea45" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:27:10.983109 kubelet[2026]: E1002 19:27:10.983083 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:11.084295 kubelet[2026]: E1002 19:27:11.084242 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:11.185179 kubelet[2026]: E1002 19:27:11.185146 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:11.286010 kubelet[2026]: E1002 19:27:11.285949 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:11.386715 kubelet[2026]: E1002 19:27:11.386660 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:11.487550 kubelet[2026]: E1002 19:27:11.487412 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:11.588224 kubelet[2026]: E1002 19:27:11.588164 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:11.626641 kubelet[2026]: E1002 19:27:11.626595 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:11.688731 kubelet[2026]: E1002 19:27:11.688664 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:11.789799 kubelet[2026]: E1002 19:27:11.789670 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:11.890526 kubelet[2026]: E1002 19:27:11.890468 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:11.991221 kubelet[2026]: E1002 19:27:11.991184 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:12.092120 kubelet[2026]: E1002 19:27:12.091984 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:12.127801 kubelet[2026]: W1002 19:27:12.127686 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:12.127932 kubelet[2026]: E1002 19:27:12.127812 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:27:12.192563 kubelet[2026]: E1002 19:27:12.192506 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:12.293262 kubelet[2026]: E1002 19:27:12.293219 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:12.394110 kubelet[2026]: E1002 19:27:12.393969 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:12.494826 kubelet[2026]: E1002 19:27:12.494766 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:12.595541 kubelet[2026]: E1002 19:27:12.595498 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:12.626973 kubelet[2026]: E1002 19:27:12.626946 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:12.668129 kubelet[2026]: W1002 19:27:12.668092 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: 
nodes "172.31.27.68" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:12.668344 kubelet[2026]: E1002 19:27:12.668310 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.27.68" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:27:12.696467 kubelet[2026]: E1002 19:27:12.696405 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:12.797365 kubelet[2026]: E1002 19:27:12.797316 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:12.898113 kubelet[2026]: E1002 19:27:12.898076 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:12.998954 kubelet[2026]: E1002 19:27:12.998812 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:13.099426 kubelet[2026]: E1002 19:27:13.099392 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:13.184675 kubelet[2026]: W1002 19:27:13.184632 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:13.184850 kubelet[2026]: E1002 19:27:13.184683 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:27:13.199969 kubelet[2026]: E1002 19:27:13.199940 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:13.300865 kubelet[2026]: E1002 19:27:13.300743 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:13.401581 kubelet[2026]: E1002 19:27:13.401541 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:13.502366 kubelet[2026]: E1002 19:27:13.502307 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:13.603279 kubelet[2026]: E1002 19:27:13.603139 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:13.627649 kubelet[2026]: E1002 19:27:13.627590 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:13.703907 kubelet[2026]: E1002 19:27:13.703865 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:13.804984 kubelet[2026]: E1002 19:27:13.804947 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:13.906256 kubelet[2026]: E1002 19:27:13.906145 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:13.913087 kubelet[2026]: W1002 19:27:13.913056 2026 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:13.913297 kubelet[2026]: E1002 19:27:13.913276 2026 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:27:14.007111 kubelet[2026]: E1002 19:27:14.007070 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:14.107654 kubelet[2026]: E1002 19:27:14.107601 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:14.208407 kubelet[2026]: E1002 19:27:14.208362 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:14.309119 kubelet[2026]: E1002 19:27:14.309087 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:14.409855 kubelet[2026]: E1002 19:27:14.409793 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:14.510692 kubelet[2026]: E1002 19:27:14.510567 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:14.604147 kubelet[2026]: I1002 19:27:14.604099 2026 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:27:14.611624 kubelet[2026]: E1002 19:27:14.611575 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:14.627911 kubelet[2026]: E1002 19:27:14.627870 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:14.712214 kubelet[2026]: E1002 19:27:14.712180 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:14.768882 kubelet[2026]: E1002 19:27:14.768737 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:14.769508 kubelet[2026]: E1002 19:27:14.769484 2026 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.27.68\" not found" Oct 2 19:27:14.813094 kubelet[2026]: E1002 19:27:14.813062 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:14.914364 kubelet[2026]: E1002 19:27:14.914314 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:15.015260 kubelet[2026]: E1002 19:27:15.015230 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:15.016801 kubelet[2026]: E1002 19:27:15.016773 2026 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.27.68" not found Oct 2 19:27:15.116329 kubelet[2026]: E1002 19:27:15.115510 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:15.216493 kubelet[2026]: E1002 19:27:15.216439 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:15.317509 kubelet[2026]: E1002 19:27:15.317474 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:15.418539 kubelet[2026]: E1002 19:27:15.418496 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:15.519351 kubelet[2026]: E1002 19:27:15.519289 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:15.620210 kubelet[2026]: E1002 19:27:15.620148 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" 
not found" Oct 2 19:27:15.628424 kubelet[2026]: E1002 19:27:15.628399 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:15.720907 kubelet[2026]: E1002 19:27:15.720754 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:15.821627 kubelet[2026]: E1002 19:27:15.821579 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:15.922313 kubelet[2026]: E1002 19:27:15.922232 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:16.023220 kubelet[2026]: E1002 19:27:16.023061 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:16.049511 kubelet[2026]: E1002 19:27:16.049463 2026 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.27.68" not found Oct 2 19:27:16.123639 kubelet[2026]: E1002 19:27:16.123608 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:16.224562 kubelet[2026]: E1002 19:27:16.224500 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:16.325324 kubelet[2026]: E1002 19:27:16.325197 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:16.426040 kubelet[2026]: E1002 19:27:16.425980 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:16.526343 kubelet[2026]: E1002 19:27:16.526294 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:16.627441 kubelet[2026]: E1002 19:27:16.627292 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:16.629561 kubelet[2026]: E1002 19:27:16.629516 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:16.727860 kubelet[2026]: E1002 19:27:16.727811 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:16.828662 kubelet[2026]: E1002 19:27:16.828614 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:16.930148 kubelet[2026]: E1002 19:27:16.930102 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:17.031026 kubelet[2026]: E1002 19:27:17.030964 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:17.131994 kubelet[2026]: E1002 19:27:17.131953 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:17.232919 kubelet[2026]: E1002 19:27:17.232780 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:17.265863 kubelet[2026]: E1002 19:27:17.265805 2026 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.27.68\" not found" node="172.31.27.68" Oct 2 19:27:17.333145 kubelet[2026]: E1002 19:27:17.333115 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:17.364897 kubelet[2026]: I1002 19:27:17.364864 2026 kubelet_node_status.go:70] "Attempting to register node" node="172.31.27.68" Oct 2 19:27:17.433628 kubelet[2026]: E1002 19:27:17.433569 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:17.453197 kubelet[2026]: I1002 19:27:17.453146 2026 
kubelet_node_status.go:73] "Successfully registered node" node="172.31.27.68" Oct 2 19:27:17.534854 kubelet[2026]: E1002 19:27:17.534690 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:17.561133 sudo[1822]: pam_unix(sudo:session): session closed for user root Oct 2 19:27:17.563982 kernel: kauditd_printk_skb: 540 callbacks suppressed Oct 2 19:27:17.564048 kernel: audit: type=1106 audit(1696274837.559:638): pid=1822 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:17.559000 audit[1822]: USER_END pid=1822 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:17.559000 audit[1822]: CRED_DISP pid=1822 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:17.581024 kernel: audit: type=1104 audit(1696274837.559:639): pid=1822 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:27:17.584816 sshd[1819]: pam_unix(sshd:session): session closed for user core Oct 2 19:27:17.587000 audit[1819]: USER_END pid=1819 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:17.592345 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:27:17.595066 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:27:17.597637 systemd[1]: sshd@6-172.31.27.68:22-139.178.89.65:45154.service: Deactivated successfully. Oct 2 19:27:17.599330 systemd-logind[1552]: Removed session 7. Oct 2 19:27:17.587000 audit[1819]: CRED_DISP pid=1819 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:17.609313 kernel: audit: type=1106 audit(1696274837.587:640): pid=1819 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:17.609400 kernel: audit: type=1104 audit(1696274837.587:641): pid=1819 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:27:17.609466 kernel: audit: type=1131 audit(1696274837.596:642): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.27.68:22-139.178.89.65:45154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:27:17.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.27.68:22-139.178.89.65:45154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:17.629930 kubelet[2026]: E1002 19:27:17.629872 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:17.636175 kubelet[2026]: E1002 19:27:17.636140 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:17.736445 kubelet[2026]: E1002 19:27:17.736395 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:17.838371 kubelet[2026]: E1002 19:27:17.837201 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:17.938806 kubelet[2026]: E1002 19:27:17.938766 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:18.039470 kubelet[2026]: E1002 19:27:18.039385 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:18.081049 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 2 19:27:18.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:18.090745 kernel: audit: type=1131 audit(1696274838.079:643): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:27:18.116000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:27:18.116000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:27:18.123095 kernel: audit: type=1334 audit(1696274838.116:644): prog-id=75 op=UNLOAD Oct 2 19:27:18.123173 kernel: audit: type=1334 audit(1696274838.116:645): prog-id=74 op=UNLOAD Oct 2 19:27:18.123216 kernel: audit: type=1334 audit(1696274838.116:646): prog-id=73 op=UNLOAD Oct 2 19:27:18.116000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:27:18.140391 kubelet[2026]: E1002 19:27:18.140345 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:18.240950 kubelet[2026]: E1002 19:27:18.240906 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:18.341792 kubelet[2026]: E1002 19:27:18.341751 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:18.442647 kubelet[2026]: E1002 19:27:18.442591 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:18.543501 kubelet[2026]: E1002 19:27:18.543446 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:18.631119 kubelet[2026]: E1002 19:27:18.631076 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:18.643868 kubelet[2026]: E1002 19:27:18.643816 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:18.745017 kubelet[2026]: E1002 19:27:18.744898 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:18.845395 kubelet[2026]: E1002 19:27:18.845355 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:18.947081 kubelet[2026]: E1002 
19:27:18.947026 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:19.048012 kubelet[2026]: E1002 19:27:19.047869 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:19.148781 kubelet[2026]: E1002 19:27:19.148681 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:19.249691 kubelet[2026]: E1002 19:27:19.249631 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:19.350591 kubelet[2026]: E1002 19:27:19.350465 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:19.451287 kubelet[2026]: E1002 19:27:19.451224 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:19.551953 kubelet[2026]: E1002 19:27:19.551897 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:19.631663 kubelet[2026]: E1002 19:27:19.631535 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:19.652031 kubelet[2026]: E1002 19:27:19.651981 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:19.752533 kubelet[2026]: E1002 19:27:19.752481 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:19.770362 kubelet[2026]: E1002 19:27:19.770313 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:19.853425 kubelet[2026]: E1002 19:27:19.853380 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:19.954043 kubelet[2026]: E1002 19:27:19.954000 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:20.054801 kubelet[2026]: E1002 19:27:20.054760 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:20.155594 kubelet[2026]: E1002 19:27:20.155538 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:20.257394 kubelet[2026]: E1002 19:27:20.256368 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:20.357218 kubelet[2026]: E1002 19:27:20.357160 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:20.458091 kubelet[2026]: E1002 19:27:20.458033 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:20.558952 kubelet[2026]: E1002 19:27:20.558821 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:20.632525 kubelet[2026]: E1002 19:27:20.632468 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:20.659285 kubelet[2026]: E1002 19:27:20.659233 2026 kubelet.go:2448] "Error getting node" err="node \"172.31.27.68\" not found" Oct 2 19:27:20.760025 kubelet[2026]: I1002 19:27:20.759967 2026 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:27:20.760689 env[1559]: time="2023-10-02T19:27:20.760548079Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 2 19:27:20.761456 kubelet[2026]: I1002 19:27:20.761428 2026 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:27:20.762313 kubelet[2026]: E1002 19:27:20.762286 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:21.631093 kubelet[2026]: I1002 19:27:21.631044 2026 apiserver.go:52] "Watching apiserver" Oct 2 19:27:21.633259 kubelet[2026]: E1002 19:27:21.633223 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:21.635492 kubelet[2026]: I1002 19:27:21.635435 2026 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:27:21.635628 kubelet[2026]: I1002 19:27:21.635540 2026 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:27:21.646412 systemd[1]: Created slice kubepods-burstable-pod4eb36433_a779_48f6_9031_b14f9bb6926c.slice. Oct 2 19:27:21.653602 kubelet[2026]: I1002 19:27:21.653566 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-etc-cni-netd\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.653846 kubelet[2026]: I1002 19:27:21.653820 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4eb36433-a779-48f6-9031-b14f9bb6926c-clustermesh-secrets\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.653993 kubelet[2026]: I1002 19:27:21.653972 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-config-path\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.654135 kubelet[2026]: I1002 19:27:21.654114 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0051784b-d5cf-41cd-b8c3-2cda565c1795-xtables-lock\") pod \"kube-proxy-ksxbb\" (UID: \"0051784b-d5cf-41cd-b8c3-2cda565c1795\") " pod="kube-system/kube-proxy-ksxbb" Oct 2 19:27:21.654278 kubelet[2026]: I1002 19:27:21.654257 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-bpf-maps\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.654446 kubelet[2026]: I1002 19:27:21.654425 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cni-path\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.654586 kubelet[2026]: I1002 19:27:21.654565 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-xtables-lock\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " 
pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.654753 kubelet[2026]: I1002 19:27:21.654721 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-host-proc-sys-net\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.654890 kubelet[2026]: I1002 19:27:21.654869 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4eb36433-a779-48f6-9031-b14f9bb6926c-hubble-tls\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.655025 kubelet[2026]: I1002 19:27:21.655005 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0051784b-d5cf-41cd-b8c3-2cda565c1795-kube-proxy\") pod \"kube-proxy-ksxbb\" (UID: \"0051784b-d5cf-41cd-b8c3-2cda565c1795\") " pod="kube-system/kube-proxy-ksxbb" Oct 2 19:27:21.655177 kubelet[2026]: I1002 19:27:21.655156 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-cgroup\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.655312 kubelet[2026]: I1002 19:27:21.655292 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-lib-modules\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.655461 kubelet[2026]: I1002 19:27:21.655441 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-host-proc-sys-kernel\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.655598 kubelet[2026]: I1002 19:27:21.655578 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvrgm\" (UniqueName: \"kubernetes.io/projected/4eb36433-a779-48f6-9031-b14f9bb6926c-kube-api-access-jvrgm\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.655753 kubelet[2026]: I1002 19:27:21.655733 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0051784b-d5cf-41cd-b8c3-2cda565c1795-lib-modules\") pod \"kube-proxy-ksxbb\" (UID: \"0051784b-d5cf-41cd-b8c3-2cda565c1795\") " pod="kube-system/kube-proxy-ksxbb" Oct 2 19:27:21.655906 kubelet[2026]: I1002 19:27:21.655885 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-run\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.656044 kubelet[2026]: I1002 19:27:21.656021 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-hostproc\") pod \"cilium-w4m2l\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " pod="kube-system/cilium-w4m2l" Oct 2 19:27:21.656184 kubelet[2026]: I1002 19:27:21.656164 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hltqb\" (UniqueName: \"kubernetes.io/projected/0051784b-d5cf-41cd-b8c3-2cda565c1795-kube-api-access-hltqb\") pod \"kube-proxy-ksxbb\" (UID: \"0051784b-d5cf-41cd-b8c3-2cda565c1795\") " pod="kube-system/kube-proxy-ksxbb" Oct 2 19:27:21.656293 kubelet[2026]: I1002 19:27:21.656272 2026 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:27:21.668336 systemd[1]: Created slice kubepods-besteffort-pod0051784b_d5cf_41cd_b8c3_2cda565c1795.slice. Oct 2 19:27:21.981207 env[1559]: time="2023-10-02T19:27:21.981154736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ksxbb,Uid:0051784b-d5cf-41cd-b8c3-2cda565c1795,Namespace:kube-system,Attempt:0,}" Oct 2 19:27:22.265579 env[1559]: time="2023-10-02T19:27:22.265145006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w4m2l,Uid:4eb36433-a779-48f6-9031-b14f9bb6926c,Namespace:kube-system,Attempt:0,}" Oct 2 19:27:22.634410 kubelet[2026]: E1002 19:27:22.633996 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:22.701393 env[1559]: time="2023-10-02T19:27:22.701334969Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.703370 env[1559]: time="2023-10-02T19:27:22.703325846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.712402 env[1559]: time="2023-10-02T19:27:22.712344967Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.716274 env[1559]: time="2023-10-02T19:27:22.716208724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.720688 env[1559]: time="2023-10-02T19:27:22.720627065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.725313 env[1559]: time="2023-10-02T19:27:22.725236304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.726699 env[1559]: time="2023-10-02T19:27:22.726626948Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.728429 env[1559]: time="2023-10-02T19:27:22.728378447Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:22.770956 env[1559]: 
time="2023-10-02T19:27:22.770787143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:27:22.771177 env[1559]: time="2023-10-02T19:27:22.770789831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:27:22.771293 env[1559]: time="2023-10-02T19:27:22.770880732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:27:22.771293 env[1559]: time="2023-10-02T19:27:22.770998969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:27:22.771561 env[1559]: time="2023-10-02T19:27:22.771465665Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b pid=2129 runtime=io.containerd.runc.v2 Oct 2 19:27:22.771561 env[1559]: time="2023-10-02T19:27:22.771460121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:27:22.771851 env[1559]: time="2023-10-02T19:27:22.771502121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:27:22.772414 env[1559]: time="2023-10-02T19:27:22.772262363Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b1f435fed41bf9f92682b4a166a736203779b7e2cb095955e5f09fdc7ac38cc pid=2130 runtime=io.containerd.runc.v2 Oct 2 19:27:22.778172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902354879.mount: Deactivated successfully. Oct 2 19:27:22.832864 systemd[1]: Started cri-containerd-0b1f435fed41bf9f92682b4a166a736203779b7e2cb095955e5f09fdc7ac38cc.scope. Oct 2 19:27:22.876079 systemd[1]: Started cri-containerd-fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b.scope. 
Oct 2 19:27:22.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.912725 kernel: audit: type=1400 audit(1696274842.893:647): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.912796 kernel: audit: type=1400 audit(1696274842.893:648): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.921303 kernel: audit: type=1400 audit(1696274842.893:649): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.940026 kernel: audit: type=1400 audit(1696274842.893:650): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.940132 kernel: audit: type=1400 audit(1696274842.893:651): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.947960 kernel: audit: type=1400 audit(1696274842.893:652): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.893000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.964734 kernel: audit: type=1400 audit(1696274842.893:653): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.964872 kernel: audit: type=1400 audit(1696274842.893:654): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977713 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:27:22.977826 kernel: audit: type=1400 audit(1696274842.893:655): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.893000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit: BPF prog-id=76 op=LOAD Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2130 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316634333566656434316266396639323638326234613136366137 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2130 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316634333566656434316266396639323638326234613136366137 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit: BPF prog-id=77 op=LOAD Oct 2 19:27:22.902000 audit[2151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2130 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316634333566656434316266396639323638326234613136366137 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: 
denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit: BPF prog-id=78 op=LOAD Oct 2 19:27:22.902000 audit[2151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2130 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316634333566656434316266396639323638326234613136366137 Oct 2 19:27:22.902000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:27:22.902000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { perfmon } for pid=2151 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit[2151]: AVC avc: denied { bpf } for pid=2151 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.902000 audit: BPF prog-id=79 op=LOAD Oct 2 19:27:22.902000 audit[2151]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2130 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.902000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062316634333566656434316266396639323638326234613136366137 Oct 2 19:27:22.947000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.947000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.947000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.947000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.947000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.947000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.947000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.947000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.947000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.955000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.955000 audit: BPF prog-id=80 op=LOAD Oct 2 19:27:22.974000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.974000 audit[2155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400011d5a0 a2=3c a3=0 items=0 ppid=2129 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664393136383366663730373537663564653061313936366636383830 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit: BPF prog-id=81 op=LOAD Oct 2 19:27:22.977000 audit[2155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011d8e0 a2=78 a3=0 items=0 ppid=2129 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664393136383366663730373537663564653061313936366636383830 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:27:22.977000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.977000 audit: BPF prog-id=82 op=LOAD Oct 2 19:27:22.977000 audit[2155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400011d670 a2=78 a3=0 items=0 ppid=2129 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.977000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664393136383366663730373537663564653061313936366636383830 Oct 2 19:27:22.978000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:27:22.978000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:27:22.978000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.978000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.978000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.978000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.978000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.978000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.978000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.978000 audit[2155]: AVC avc: denied { perfmon } for pid=2155 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.978000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:27:22.978000 audit[2155]: AVC avc: denied { bpf } for pid=2155 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:22.978000 audit: BPF prog-id=83 op=LOAD Oct 2 19:27:22.978000 audit[2155]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011db40 a2=78 a3=0 items=0 ppid=2129 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:22.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664393136383366663730373537663564653061313936366636383830 Oct 2 19:27:23.017300 env[1559]: time="2023-10-02T19:27:23.017241231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w4m2l,Uid:4eb36433-a779-48f6-9031-b14f9bb6926c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\"" Oct 2 19:27:23.021733 env[1559]: time="2023-10-02T19:27:23.021645734Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:27:23.032137 env[1559]: time="2023-10-02T19:27:23.032057545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ksxbb,Uid:0051784b-d5cf-41cd-b8c3-2cda565c1795,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b1f435fed41bf9f92682b4a166a736203779b7e2cb095955e5f09fdc7ac38cc\"" Oct 2 19:27:23.635207 kubelet[2026]: E1002 19:27:23.635137 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:23.775141 systemd[1]: run-containerd-runc-k8s.io-fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b-runc.kv6EeE.mount: Deactivated successfully. 
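(The audit records above end in a PROCTITLE field: auditd logs the triggering command line hex-encoded, with NUL bytes separating the arguments. A minimal sketch in Python, using only the first three arguments of the runc invocation recorded above, shows how to turn such a field back into argv.)

    # decode_proctitle.py - recover argv from an audit PROCTITLE value.
    # The field is the raw command line, hex-encoded, arguments separated by NUL bytes.
    hex_title = (
        "72756E6300"                                              # "runc\0"
        "2D2D726F6F7400"                                          # "--root\0"
        "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"  # "/run/containerd/runc/k8s.io"
    )
    argv = [a.decode() for a in bytes.fromhex(hex_title).split(b"\x00") if a]
    print(argv)   # ['runc', '--root', '/run/containerd/runc/k8s.io']

(The full field in the log continues with --log and the per-task log path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/, truncated here.)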
Oct 2 19:27:24.621589 kubelet[2026]: E1002 19:27:24.621526 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:24.635841 kubelet[2026]: E1002 19:27:24.635771 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:24.772780 kubelet[2026]: E1002 19:27:24.772729 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:25.636149 kubelet[2026]: E1002 19:27:25.636081 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:26.636670 kubelet[2026]: E1002 19:27:26.636598 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:27.637302 kubelet[2026]: E1002 19:27:27.637237 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:28.637595 kubelet[2026]: E1002 19:27:28.637531 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:29.638518 kubelet[2026]: E1002 19:27:29.638450 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:29.773618 kubelet[2026]: E1002 19:27:29.773475 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:30.198970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1735344123.mount: Deactivated successfully. Oct 2 19:27:30.639371 kubelet[2026]: E1002 19:27:30.638918 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:31.639862 kubelet[2026]: E1002 19:27:31.639796 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:32.640651 kubelet[2026]: E1002 19:27:32.640582 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:32.864350 update_engine[1553]: I1002 19:27:32.863768 1553 update_attempter.cc:505] Updating boot flags... 
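(The kubelet messages repeated above come from two independent conditions: the static-pod source path /etc/kubernetes/manifests does not exist on this node, and no CNI configuration is available yet because the Cilium agent has not started. A small sketch of checking both on the host; the CNI directory /etc/cni/net.d is the usual default and is assumed here.)

    # node_status_probe.py - check the two conditions kubelet keeps logging above:
    # a missing static-pod manifest directory and an empty or absent CNI config directory.
    import os

    paths = {
        "static pods": "/etc/kubernetes/manifests",  # path named in the kubelet messages
        "CNI config":  "/etc/cni/net.d",             # assumed default CNI conf dir
    }
    for label, path in paths.items():
        if not os.path.isdir(path):
            print(f"{label}: {path} is missing")
        else:
            print(f"{label}: {path} has {len(os.listdir(path))} entries")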
Oct 2 19:27:33.641614 kubelet[2026]: E1002 19:27:33.641518 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:34.420205 env[1559]: time="2023-10-02T19:27:34.420117351Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:34.423298 env[1559]: time="2023-10-02T19:27:34.423245967Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:34.426792 env[1559]: time="2023-10-02T19:27:34.426742781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:34.429485 env[1559]: time="2023-10-02T19:27:34.428501987Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db\"" Oct 2 19:27:34.431118 env[1559]: time="2023-10-02T19:27:34.431063566Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:27:34.433173 env[1559]: time="2023-10-02T19:27:34.433117050Z" level=info msg="CreateContainer within sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:27:34.456815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount84649258.mount: Deactivated successfully. Oct 2 19:27:34.465136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110106279.mount: Deactivated successfully. Oct 2 19:27:34.476332 env[1559]: time="2023-10-02T19:27:34.476234190Z" level=info msg="CreateContainer within sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c\"" Oct 2 19:27:34.477769 env[1559]: time="2023-10-02T19:27:34.477690132Z" level=info msg="StartContainer for \"73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c\"" Oct 2 19:27:34.525183 systemd[1]: Started cri-containerd-73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c.scope. Oct 2 19:27:34.561636 systemd[1]: cri-containerd-73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c.scope: Deactivated successfully. Oct 2 19:27:34.641741 kubelet[2026]: E1002 19:27:34.641641 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:34.776306 kubelet[2026]: E1002 19:27:34.775034 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:35.451372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c-rootfs.mount: Deactivated successfully. 
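(The sequence on the line above is one start attempt of the cilium mount-cgroup init container: CreateContainer returns container ID 73b8295d..., StartContainer is issued, systemd starts the transient cri-containerd-<id>.scope, and the scope is deactivated almost immediately, which is the first sign the init process never came up; the failure details follow. A rough sketch for pulling every journal line that mentions one container ID, so such an attempt can be read as a single timeline.)

    # follow_container.py - print every log line that mentions a given container ID.
    # Usage sketch: journalctl -o short | python3 follow_container.py
    import re
    import sys

    cid = "73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c"  # from the log above
    pattern = re.compile(re.escape(cid[:12]))   # the short ID is enough to match
    for line in sys.stdin:
        if pattern.search(line):
            print(line.rstrip())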
Oct 2 19:27:35.642789 kubelet[2026]: E1002 19:27:35.642743 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:36.146010 env[1559]: time="2023-10-02T19:27:36.145891880Z" level=info msg="shim disconnected" id=73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c Oct 2 19:27:36.146623 env[1559]: time="2023-10-02T19:27:36.146582698Z" level=warning msg="cleaning up after shim disconnected" id=73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c namespace=k8s.io Oct 2 19:27:36.146834 env[1559]: time="2023-10-02T19:27:36.146805251Z" level=info msg="cleaning up dead shim" Oct 2 19:27:36.178013 env[1559]: time="2023-10-02T19:27:36.177947410Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2407 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:27:36Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:27:36.178763 env[1559]: time="2023-10-02T19:27:36.178584456Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Oct 2 19:27:36.179379 env[1559]: time="2023-10-02T19:27:36.179327463Z" level=error msg="Failed to pipe stderr of container \"73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c\"" error="reading from a closed fifo" Oct 2 19:27:36.179921 env[1559]: time="2023-10-02T19:27:36.179530155Z" level=error msg="Failed to pipe stdout of container \"73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c\"" error="reading from a closed fifo" Oct 2 19:27:36.187096 env[1559]: time="2023-10-02T19:27:36.187009577Z" level=error msg="StartContainer for \"73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:27:36.188409 kubelet[2026]: E1002 19:27:36.187602 2026 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c" Oct 2 19:27:36.188409 kubelet[2026]: E1002 19:27:36.187843 2026 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:27:36.188409 kubelet[2026]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:27:36.188409 kubelet[2026]: rm /hostbin/cilium-mount Oct 2 19:27:36.188910 kubelet[2026]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jvrgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:27:36.189062 kubelet[2026]: E1002 19:27:36.187932 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:27:36.644440 kubelet[2026]: E1002 19:27:36.644331 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:36.792811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736866833.mount: Deactivated successfully. Oct 2 19:27:37.140080 env[1559]: time="2023-10-02T19:27:37.140004408Z" level=info msg="CreateContainer within sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:27:37.165670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3191248042.mount: Deactivated successfully. Oct 2 19:27:37.179353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004315167.mount: Deactivated successfully. 
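(The failure above is the reason the scope died: runc could not set the SELinux key-creation label before exec'ing the container process. The init container's SecurityContext in the kubelet dump requests SELinuxOptions with Type spc_t, so the runtime writes that context to /proc/self/attr/keycreate, and this host's kernel rejects the write with EINVAL. A minimal sketch of the single write that fails; the full context string is assembled by the runtime, and system_u:system_r is assumed here. It may require root and an SELinux-enabled kernel to observe the same error.)

    # keycreate_probe.py - reproduce the write runc reports as
    # "write /proc/self/attr/keycreate: invalid argument": setting the SELinux
    # context used for kernel keyrings created by this process.
    label = "system_u:system_r:spc_t:s0"   # type/level from the pod's SELinuxOptions; user/role assumed
    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(label)
        print("keycreate label accepted")
    except OSError as e:
        print("keycreate write failed:", e)   # EINVAL when the loaded policy rejects the context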
Oct 2 19:27:37.188785 env[1559]: time="2023-10-02T19:27:37.188664757Z" level=info msg="CreateContainer within sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\"" Oct 2 19:27:37.190305 env[1559]: time="2023-10-02T19:27:37.190228362Z" level=info msg="StartContainer for \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\"" Oct 2 19:27:37.248824 systemd[1]: Started cri-containerd-88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb.scope. Oct 2 19:27:37.288816 systemd[1]: cri-containerd-88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb.scope: Deactivated successfully. Oct 2 19:27:37.627329 env[1559]: time="2023-10-02T19:27:37.627215604Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:37.631523 env[1559]: time="2023-10-02T19:27:37.631455506Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:37.634718 env[1559]: time="2023-10-02T19:27:37.634641492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:37.637834 env[1559]: time="2023-10-02T19:27:37.637785610Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:27:37.638227 env[1559]: time="2023-10-02T19:27:37.638170188Z" level=info msg="shim disconnected" id=88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb Oct 2 19:27:37.638404 env[1559]: time="2023-10-02T19:27:37.638369196Z" level=warning msg="cleaning up after shim disconnected" id=88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb namespace=k8s.io Oct 2 19:27:37.638567 env[1559]: time="2023-10-02T19:27:37.638538361Z" level=info msg="cleaning up dead shim" Oct 2 19:27:37.639101 env[1559]: time="2023-10-02T19:27:37.639046190Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e\"" Oct 2 19:27:37.642564 env[1559]: time="2023-10-02T19:27:37.642474721Z" level=info msg="CreateContainer within sandbox \"0b1f435fed41bf9f92682b4a166a736203779b7e2cb095955e5f09fdc7ac38cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:27:37.644875 kubelet[2026]: E1002 19:27:37.644759 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:37.674825 env[1559]: time="2023-10-02T19:27:37.674748418Z" level=info msg="CreateContainer within sandbox \"0b1f435fed41bf9f92682b4a166a736203779b7e2cb095955e5f09fdc7ac38cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70309546f0cc7b8f7e899f0311b54531bb5e4ad9aaf7756a3d340aca49cac569\"" Oct 2 19:27:37.675923 env[1559]: time="2023-10-02T19:27:37.675870145Z" level=info msg="StartContainer for \"70309546f0cc7b8f7e899f0311b54531bb5e4ad9aaf7756a3d340aca49cac569\"" Oct 2 19:27:37.681915 env[1559]: 
time="2023-10-02T19:27:37.681839205Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2445 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:27:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:27:37.682401 env[1559]: time="2023-10-02T19:27:37.682282942Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed" Oct 2 19:27:37.684953 env[1559]: time="2023-10-02T19:27:37.684835014Z" level=error msg="Failed to pipe stderr of container \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\"" error="reading from a closed fifo" Oct 2 19:27:37.685125 env[1559]: time="2023-10-02T19:27:37.684986203Z" level=error msg="Failed to pipe stdout of container \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\"" error="reading from a closed fifo" Oct 2 19:27:37.688041 env[1559]: time="2023-10-02T19:27:37.687951580Z" level=error msg="StartContainer for \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:27:37.688578 kubelet[2026]: E1002 19:27:37.688313 2026 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb" Oct 2 19:27:37.688578 kubelet[2026]: E1002 19:27:37.688470 2026 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:27:37.688578 kubelet[2026]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:27:37.688578 kubelet[2026]: rm /hostbin/cilium-mount Oct 2 19:27:37.688943 kubelet[2026]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jvrgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:27:37.689230 kubelet[2026]: E1002 19:27:37.688530 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:27:37.724943 systemd[1]: Started cri-containerd-70309546f0cc7b8f7e899f0311b54531bb5e4ad9aaf7756a3d340aca49cac569.scope. Oct 2 19:27:37.772000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.776398 kernel: kauditd_printk_skb: 104 callbacks suppressed Oct 2 19:27:37.776515 kernel: audit: type=1400 audit(1696274857.772:682): avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.772000 audit[2467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001b95a0 a2=3c a3=0 items=0 ppid=2130 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.800028 kernel: audit: type=1300 audit(1696274857.772:682): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001b95a0 a2=3c a3=0 items=0 ppid=2130 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730333039353436663063633762386637653839396630333131623534 Oct 2 19:27:37.811738 kernel: audit: type=1327 audit(1696274857.772:682): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730333039353436663063633762386637653839396630333131623534 Oct 2 19:27:37.772000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.819658 kernel: audit: type=1400 audit(1696274857.772:683): 
avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.772000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.827963 kernel: audit: type=1400 audit(1696274857.772:683): avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.772000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.836940 kernel: audit: type=1400 audit(1696274857.772:683): avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.772000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.845574 kernel: audit: type=1400 audit(1696274857.772:683): avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.772000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.853568 kernel: audit: type=1400 audit(1696274857.772:683): avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.853790 kernel: audit: type=1400 audit(1696274857.772:683): avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.772000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.772000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.868892 kernel: audit: type=1400 audit(1696274857.772:683): avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.772000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.772000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.772000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:27:37.772000 audit: BPF prog-id=84 op=LOAD Oct 2 19:27:37.772000 audit[2467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001b98e0 a2=78 a3=0 items=0 ppid=2130 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730333039353436663063633762386637653839396630333131623534 Oct 2 19:27:37.775000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.775000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.775000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.775000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.775000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.775000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.775000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.775000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.775000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.775000 audit: BPF prog-id=85 op=LOAD Oct 2 19:27:37.775000 audit[2467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001b9670 a2=78 a3=0 items=0 ppid=2130 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730333039353436663063633762386637653839396630333131623534 Oct 2 19:27:37.782000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:27:37.782000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:27:37.782000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.782000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.782000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.782000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.782000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.782000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.782000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.782000 audit[2467]: AVC avc: denied { perfmon } for pid=2467 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.782000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.782000 audit[2467]: AVC avc: denied { bpf } for pid=2467 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:27:37.782000 audit: BPF prog-id=86 op=LOAD Oct 2 19:27:37.782000 audit[2467]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001b9b40 a2=78 a3=0 items=0 ppid=2130 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:37.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730333039353436663063633762386637653839396630333131623534 Oct 2 19:27:37.877190 env[1559]: time="2023-10-02T19:27:37.877124343Z" level=info msg="StartContainer for \"70309546f0cc7b8f7e899f0311b54531bb5e4ad9aaf7756a3d340aca49cac569\" returns successfully" Oct 2 19:27:37.965743 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:27:37.965936 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:27:37.967238 kernel: IPVS: ipvs loaded. Oct 2 19:27:37.991778 kernel: IPVS: [rr] scheduler registered. Oct 2 19:27:38.005798 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:27:38.018039 kernel: IPVS: [sh] scheduler registered. 
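(With kube-proxy now running, the kernel loads the IPVS modules, most likely probed during kube-proxy's proxier selection, and the long run of NETFILTER_CFG audit records that follows is evidently kube-proxy registering its base chains, KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD and KUBE-PROXY-FIREWALL, in the mangle, nat and filter tables via xtables-nft-multi. The PROCTITLE fields decode the same way as before; applying the earlier sketch to the first record below, audit pid=2526, table=mangle:)

    # decode_iptables_proctitle.py - the same PROCTITLE decoding applied to the
    # first NETFILTER_CFG record below.
    hex_title = ("69707461626C6573002D770035002D5700313030303030"
                 "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(" ".join(a.decode() for a in bytes.fromhex(hex_title).split(b"\x00")))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle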
Oct 2 19:27:38.115000 audit[2526]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.115000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffecb48030 a2=0 a3=ffffa40dc6c0 items=0 ppid=2478 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:27:38.121000 audit[2527]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.121000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffded5a760 a2=0 a3=ffff96dc06c0 items=0 ppid=2478 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.121000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:27:38.122000 audit[2528]: NETFILTER_CFG table=mangle:37 family=10 entries=1 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.122000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffed3058e0 a2=0 a3=ffffac8a86c0 items=0 ppid=2478 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.122000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:27:38.127000 audit[2529]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.127000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc727a7b0 a2=0 a3=ffffa0d686c0 items=0 ppid=2478 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:27:38.129000 audit[2530]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.129000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb4b3220 a2=0 a3=ffff9ab926c0 items=0 ppid=2478 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.129000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:27:38.133000 audit[2531]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 
19:27:38.133000 audit[2531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffd3bd750 a2=0 a3=ffffa86de6c0 items=0 ppid=2478 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.133000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:27:38.141532 kubelet[2026]: I1002 19:27:38.141491 2026 scope.go:115] "RemoveContainer" containerID="73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c" Oct 2 19:27:38.142225 kubelet[2026]: I1002 19:27:38.142195 2026 scope.go:115] "RemoveContainer" containerID="73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c" Oct 2 19:27:38.145385 env[1559]: time="2023-10-02T19:27:38.145330411Z" level=info msg="RemoveContainer for \"73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c\"" Oct 2 19:27:38.147921 env[1559]: time="2023-10-02T19:27:38.147853779Z" level=info msg="RemoveContainer for \"73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c\"" Oct 2 19:27:38.148130 env[1559]: time="2023-10-02T19:27:38.147999376Z" level=error msg="RemoveContainer for \"73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c\" failed" error="failed to set removing state for container \"73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c\": container is already in removing state" Oct 2 19:27:38.148367 kubelet[2026]: E1002 19:27:38.148315 2026 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c\": container is already in removing state" containerID="73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c" Oct 2 19:27:38.148462 kubelet[2026]: E1002 19:27:38.148415 2026 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c": container is already in removing state; Skipping pod "cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)" Oct 2 19:27:38.150497 kubelet[2026]: E1002 19:27:38.150448 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:27:38.151505 env[1559]: time="2023-10-02T19:27:38.151454246Z" level=info msg="RemoveContainer for \"73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c\" returns successfully" Oct 2 19:27:38.228000 audit[2532]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.228000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcc200450 a2=0 a3=ffffb9ff46c0 items=0 ppid=2478 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.228000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:27:38.237000 audit[2534]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.237000 audit[2534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff15b27a0 a2=0 a3=ffffa0faf6c0 items=0 ppid=2478 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.237000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:27:38.249000 audit[2537]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.249000 audit[2537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffee077d70 a2=0 a3=ffffb89c96c0 items=0 ppid=2478 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:27:38.253000 audit[2538]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.253000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd66a9dd0 a2=0 a3=ffff9360f6c0 items=0 ppid=2478 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.253000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:27:38.262000 audit[2540]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.262000 audit[2540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffce9f700 a2=0 a3=ffff8fbdf6c0 items=0 ppid=2478 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:27:38.267000 audit[2541]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.267000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffead55aa0 a2=0 a3=ffffbcacd6c0 
items=0 ppid=2478 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:27:38.276000 audit[2543]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=2543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.276000 audit[2543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd2dc2270 a2=0 a3=ffffba4346c0 items=0 ppid=2478 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:27:38.288000 audit[2546]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.288000 audit[2546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffea321ab0 a2=0 a3=ffffad0a06c0 items=0 ppid=2478 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.288000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:27:38.293000 audit[2547]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.293000 audit[2547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecf916d0 a2=0 a3=ffff8cf616c0 items=0 ppid=2478 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.293000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:27:38.301000 audit[2549]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.301000 audit[2549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff4f40d10 a2=0 a3=ffff7fbd46c0 items=0 ppid=2478 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:27:38.305000 audit[2550]: NETFILTER_CFG table=filter:51 family=2 
entries=1 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.305000 audit[2550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcbb624a0 a2=0 a3=ffff815ef6c0 items=0 ppid=2478 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.305000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:27:38.314000 audit[2552]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.314000 audit[2552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc033d970 a2=0 a3=ffff88ebe6c0 items=0 ppid=2478 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.314000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:27:38.326000 audit[2555]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.326000 audit[2555]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc8ea5b70 a2=0 a3=ffffaa03e6c0 items=0 ppid=2478 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:27:38.338000 audit[2558]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.338000 audit[2558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcd54e640 a2=0 a3=ffff835a86c0 items=0 ppid=2478 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.338000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:27:38.342000 audit[2559]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.342000 audit[2559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd25be3e0 a2=0 a3=ffff821096c0 items=0 ppid=2478 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:27:38.350000 audit[2561]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=2561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.350000 audit[2561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffddac9ba0 a2=0 a3=ffff8bb186c0 items=0 ppid=2478 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.350000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:38.361000 audit[2564]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:27:38.361000 audit[2564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff2e02590 a2=0 a3=ffffa3ac96c0 items=0 ppid=2478 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:38.389000 audit[2568]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:27:38.389000 audit[2568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffdd563e90 a2=0 a3=ffff94e416c0 items=0 ppid=2478 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:38.405000 audit[2568]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:27:38.405000 audit[2568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffdd563e90 a2=0 a3=ffff94e416c0 items=0 ppid=2478 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:38.410000 audit[2572]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.410000 audit[2572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe6f5dfe0 a2=0 a3=ffff97ba66c0 items=0 ppid=2478 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.410000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:27:38.419000 audit[2574]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=2574 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.419000 audit[2574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffca711560 a2=0 a3=ffffab7e96c0 items=0 ppid=2478 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:27:38.431000 audit[2577]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.431000 audit[2577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd5081c80 a2=0 a3=ffffb7adb6c0 items=0 ppid=2478 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:27:38.435000 audit[2578]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.435000 audit[2578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8bdb490 a2=0 a3=ffffbace36c0 items=0 ppid=2478 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.435000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:27:38.443000 audit[2580]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=2580 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.443000 audit[2580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe2a540c0 a2=0 a3=ffffae6506c0 items=0 ppid=2478 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.443000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:27:38.447000 audit[2581]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2581 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.447000 audit[2581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe60ef70 a2=0 a3=ffffadeac6c0 items=0 ppid=2478 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.447000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:27:38.455000 audit[2583]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.455000 audit[2583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc2fdcf10 a2=0 a3=ffff92f866c0 items=0 ppid=2478 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:27:38.467000 audit[2586]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.467000 audit[2586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc35654c0 a2=0 a3=ffff8e82d6c0 items=0 ppid=2478 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:27:38.471000 audit[2587]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.471000 audit[2587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd68342f0 a2=0 a3=ffffac5a46c0 items=0 ppid=2478 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:27:38.479000 audit[2589]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2589 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.479000 audit[2589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe3225070 a2=0 a3=ffffaee596c0 items=0 ppid=2478 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.479000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:27:38.484000 audit[2590]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2590 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.484000 audit[2590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc045a1f0 a2=0 a3=ffff8a06e6c0 items=0 ppid=2478 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.484000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:27:38.494000 audit[2593]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.494000 audit[2593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffcf6d4e0 a2=0 a3=ffffa90d06c0 items=0 ppid=2478 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.494000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:27:38.506000 audit[2596]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.506000 audit[2596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc0239720 a2=0 a3=ffff804246c0 items=0 ppid=2478 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.506000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:27:38.518000 audit[2599]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.518000 audit[2599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdc27fcf0 a2=0 a3=ffffa3c7e6c0 items=0 ppid=2478 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.518000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:27:38.522000 audit[2600]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 19:27:38.522000 audit[2600]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe3500d70 a2=0 a3=ffffb043d6c0 items=0 ppid=2478 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.522000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:27:38.532000 audit[2602]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.532000 audit[2602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffff2438620 a2=0 a3=ffff9b8f36c0 items=0 ppid=2478 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.532000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:38.543000 audit[2605]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=2605 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:27:38.543000 audit[2605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffec824620 a2=0 a3=ffffb34956c0 items=0 ppid=2478 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.543000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:27:38.561000 audit[2609]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=2609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:27:38.561000 audit[2609]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd1e068c0 a2=0 a3=ffffac9826c0 items=0 ppid=2478 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.561000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:38.563000 audit[2609]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:27:38.563000 audit[2609]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1860 a0=3 a1=ffffd1e068c0 a2=0 a3=ffffac9826c0 items=0 ppid=2478 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:27:38.563000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:27:38.645264 kubelet[2026]: E1002 19:27:38.645182 2026 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:39.154876 kubelet[2026]: E1002 19:27:39.154332 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:27:39.251629 kubelet[2026]: W1002 19:27:39.251534 2026 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eb36433_a779_48f6_9031_b14f9bb6926c.slice/cri-containerd-73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c.scope WatchSource:0}: container "73b8295d7b731ba1fb8a859edb3b0c25dff4e54ceb744879594df3d9be41045c" in namespace "k8s.io": not found Oct 2 19:27:39.645735 kubelet[2026]: E1002 19:27:39.645642 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:39.776206 kubelet[2026]: E1002 19:27:39.776173 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:40.646847 kubelet[2026]: E1002 19:27:40.646781 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:41.647724 kubelet[2026]: E1002 19:27:41.647642 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:42.359748 kubelet[2026]: W1002 19:27:42.359672 2026 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eb36433_a779_48f6_9031_b14f9bb6926c.slice/cri-containerd-88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb.scope WatchSource:0}: task 88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb not found: not found Oct 2 19:27:42.648651 kubelet[2026]: E1002 19:27:42.648355 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:43.649002 kubelet[2026]: E1002 19:27:43.648958 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:44.622213 kubelet[2026]: E1002 19:27:44.622146 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:44.650443 kubelet[2026]: E1002 19:27:44.650399 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:44.777202 kubelet[2026]: E1002 19:27:44.777167 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:45.651531 kubelet[2026]: E1002 19:27:45.651462 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:46.651807 kubelet[2026]: E1002 19:27:46.651761 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:47.653302 kubelet[2026]: E1002 19:27:47.653259 2026 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:48.654873 kubelet[2026]: E1002 19:27:48.654802 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:49.655504 kubelet[2026]: E1002 19:27:49.655434 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:49.779022 kubelet[2026]: E1002 19:27:49.778988 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:50.655871 kubelet[2026]: E1002 19:27:50.655799 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:51.656650 kubelet[2026]: E1002 19:27:51.656550 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:52.072758 env[1559]: time="2023-10-02T19:27:52.072530741Z" level=info msg="CreateContainer within sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:27:52.089071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211457137.mount: Deactivated successfully. Oct 2 19:27:52.099967 env[1559]: time="2023-10-02T19:27:52.099904755Z" level=info msg="CreateContainer within sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd\"" Oct 2 19:27:52.101303 env[1559]: time="2023-10-02T19:27:52.101242108Z" level=info msg="StartContainer for \"e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd\"" Oct 2 19:27:52.149528 systemd[1]: Started cri-containerd-e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd.scope. Oct 2 19:27:52.192020 systemd[1]: cri-containerd-e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd.scope: Deactivated successfully. 
Oct 2 19:27:52.210070 env[1559]: time="2023-10-02T19:27:52.209990118Z" level=info msg="shim disconnected" id=e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd Oct 2 19:27:52.210070 env[1559]: time="2023-10-02T19:27:52.210066402Z" level=warning msg="cleaning up after shim disconnected" id=e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd namespace=k8s.io Oct 2 19:27:52.210419 env[1559]: time="2023-10-02T19:27:52.210088830Z" level=info msg="cleaning up dead shim" Oct 2 19:27:52.236865 env[1559]: time="2023-10-02T19:27:52.236800503Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:27:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2635 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:27:52Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:27:52.237529 env[1559]: time="2023-10-02T19:27:52.237453184Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed" Oct 2 19:27:52.241962 env[1559]: time="2023-10-02T19:27:52.241901985Z" level=error msg="Failed to pipe stdout of container \"e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd\"" error="reading from a closed fifo" Oct 2 19:27:52.242234 env[1559]: time="2023-10-02T19:27:52.242155953Z" level=error msg="Failed to pipe stderr of container \"e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd\"" error="reading from a closed fifo" Oct 2 19:27:52.244601 env[1559]: time="2023-10-02T19:27:52.244528260Z" level=error msg="StartContainer for \"e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:27:52.245089 kubelet[2026]: E1002 19:27:52.245038 2026 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd" Oct 2 19:27:52.245260 kubelet[2026]: E1002 19:27:52.245180 2026 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:27:52.245260 kubelet[2026]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:27:52.245260 kubelet[2026]: rm /hostbin/cilium-mount Oct 2 19:27:52.245260 kubelet[2026]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jvrgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:27:52.245547 kubelet[2026]: E1002 19:27:52.245239 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:27:52.657447 kubelet[2026]: E1002 19:27:52.657393 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:53.084816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd-rootfs.mount: Deactivated successfully. 
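Each failed start attempt above ends the same way: runc cannot write the SELinux key-creation label to /proc/self/attr/keycreate and the OCI create fails with "invalid argument". A minimal diagnostic sketch, assuming it is run directly on the affected node, that reads the same per-process attribute files:

    # Inspect the per-process SELinux attributes involved in the runc failure.
    # Assumes the node exposes /proc/<pid>/attr (it does here, since runc
    # attempts the keycreate write reported in the errors above).
    from pathlib import Path

    for name in ("current", "keycreate"):
        attr = Path("/proc/self/attr") / name
        try:
            value = attr.read_text().rstrip("\x00\n") or "(unset)"
            print(f"{name}: {value}")
        except OSError as exc:
            print(f"{name}: read failed ({exc})")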
Oct 2 19:27:53.185508 kubelet[2026]: I1002 19:27:53.185444 2026 scope.go:115] "RemoveContainer" containerID="88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb" Oct 2 19:27:53.186080 kubelet[2026]: I1002 19:27:53.186012 2026 scope.go:115] "RemoveContainer" containerID="88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb" Oct 2 19:27:53.188652 env[1559]: time="2023-10-02T19:27:53.188601939Z" level=info msg="RemoveContainer for \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\"" Oct 2 19:27:53.190515 env[1559]: time="2023-10-02T19:27:53.189783100Z" level=info msg="RemoveContainer for \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\"" Oct 2 19:27:53.191084 env[1559]: time="2023-10-02T19:27:53.191023973Z" level=error msg="RemoveContainer for \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\" failed" error="failed to set removing state for container \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\": container is already in removing state" Oct 2 19:27:53.191547 kubelet[2026]: E1002 19:27:53.191489 2026 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\": container is already in removing state" containerID="88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb" Oct 2 19:27:53.191668 kubelet[2026]: I1002 19:27:53.191564 2026 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb} err="rpc error: code = Unknown desc = failed to set removing state for container \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\": container is already in removing state" Oct 2 19:27:53.194331 env[1559]: time="2023-10-02T19:27:53.194269449Z" level=info msg="RemoveContainer for \"88655bf9c62b9e723b5c475ac62f37d9b0aee4d7634adff0ca389257e69e59bb\" returns successfully" Oct 2 19:27:53.195178 kubelet[2026]: E1002 19:27:53.195132 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:27:53.658806 kubelet[2026]: E1002 19:27:53.658725 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:54.659473 kubelet[2026]: E1002 19:27:54.659404 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:54.780513 kubelet[2026]: E1002 19:27:54.780457 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:27:55.316568 kubelet[2026]: W1002 19:27:55.316502 2026 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eb36433_a779_48f6_9031_b14f9bb6926c.slice/cri-containerd-e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd.scope WatchSource:0}: task e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd not found: not found Oct 2 19:27:55.659844 kubelet[2026]: E1002 19:27:55.659779 
2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:56.660306 kubelet[2026]: E1002 19:27:56.660257 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:57.661756 kubelet[2026]: E1002 19:27:57.661638 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:58.662833 kubelet[2026]: E1002 19:27:58.662765 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:59.663295 kubelet[2026]: E1002 19:27:59.663249 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:27:59.781540 kubelet[2026]: E1002 19:27:59.781491 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:00.664575 kubelet[2026]: E1002 19:28:00.664525 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:01.665526 kubelet[2026]: E1002 19:28:01.665479 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:02.666853 kubelet[2026]: E1002 19:28:02.666782 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:03.667632 kubelet[2026]: E1002 19:28:03.667590 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:04.070087 kubelet[2026]: E1002 19:28:04.069665 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:28:04.621915 kubelet[2026]: E1002 19:28:04.621848 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:04.668608 kubelet[2026]: E1002 19:28:04.668545 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:04.783315 kubelet[2026]: E1002 19:28:04.783272 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:05.669149 kubelet[2026]: E1002 19:28:05.669076 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:06.669337 kubelet[2026]: E1002 19:28:06.669266 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:07.670247 kubelet[2026]: E1002 19:28:07.670184 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:08.671183 kubelet[2026]: E1002 19:28:08.671064 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:09.671443 
kubelet[2026]: E1002 19:28:09.671399 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:09.784620 kubelet[2026]: E1002 19:28:09.784566 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:10.673046 kubelet[2026]: E1002 19:28:10.673003 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:11.674296 kubelet[2026]: E1002 19:28:11.674223 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:12.674455 kubelet[2026]: E1002 19:28:12.674389 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:13.675119 kubelet[2026]: E1002 19:28:13.674992 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:14.675920 kubelet[2026]: E1002 19:28:14.675846 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:14.786434 kubelet[2026]: E1002 19:28:14.786354 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:15.676581 kubelet[2026]: E1002 19:28:15.676504 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:16.677627 kubelet[2026]: E1002 19:28:16.677570 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:17.678471 kubelet[2026]: E1002 19:28:17.678382 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:18.679368 kubelet[2026]: E1002 19:28:18.679311 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:19.073769 env[1559]: time="2023-10-02T19:28:19.073021898Z" level=info msg="CreateContainer within sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:28:19.090363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3293770938.mount: Deactivated successfully. Oct 2 19:28:19.101764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286283851.mount: Deactivated successfully. Oct 2 19:28:19.109160 env[1559]: time="2023-10-02T19:28:19.109005538Z" level=info msg="CreateContainer within sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37\"" Oct 2 19:28:19.110066 env[1559]: time="2023-10-02T19:28:19.109978690Z" level=info msg="StartContainer for \"d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37\"" Oct 2 19:28:19.160499 systemd[1]: Started cri-containerd-d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37.scope. 
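The recurring file_linux.go "Unable to read config path" errors only report that the kubelet's static-pod manifest directory does not exist; they appear unrelated to the cilium-w4m2l failure. A minimal sketch, assuming the kubelet uses the default staticPodPath shown in the messages, that creates the directory so the watcher stops logging:

    # Create the static-pod directory named in the recurring kubelet messages
    # (path="/etc/kubernetes/manifests"); adjust if staticPodPath differs.
    from pathlib import Path

    Path("/etc/kubernetes/manifests").mkdir(parents=True, exist_ok=True)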
Oct 2 19:28:19.199646 systemd[1]: cri-containerd-d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37.scope: Deactivated successfully. Oct 2 19:28:19.222260 env[1559]: time="2023-10-02T19:28:19.222179182Z" level=info msg="shim disconnected" id=d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37 Oct 2 19:28:19.222804 env[1559]: time="2023-10-02T19:28:19.222755326Z" level=warning msg="cleaning up after shim disconnected" id=d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37 namespace=k8s.io Oct 2 19:28:19.223015 env[1559]: time="2023-10-02T19:28:19.222975322Z" level=info msg="cleaning up dead shim" Oct 2 19:28:19.251810 env[1559]: time="2023-10-02T19:28:19.251743037Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:28:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2676 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:28:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:28:19.252479 env[1559]: time="2023-10-02T19:28:19.252399269Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:28:19.256947 env[1559]: time="2023-10-02T19:28:19.254789633Z" level=error msg="Failed to pipe stderr of container \"d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37\"" error="reading from a closed fifo" Oct 2 19:28:19.257172 env[1559]: time="2023-10-02T19:28:19.256762026Z" level=error msg="Failed to pipe stdout of container \"d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37\"" error="reading from a closed fifo" Oct 2 19:28:19.259541 env[1559]: time="2023-10-02T19:28:19.259477362Z" level=error msg="StartContainer for \"d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:28:19.260651 kubelet[2026]: E1002 19:28:19.259991 2026 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37" Oct 2 19:28:19.260651 kubelet[2026]: E1002 19:28:19.260145 2026 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:28:19.260651 kubelet[2026]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:28:19.260651 kubelet[2026]: rm /hostbin/cilium-mount Oct 2 19:28:19.261035 kubelet[2026]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jvrgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:28:19.261162 kubelet[2026]: E1002 19:28:19.260208 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:28:19.679666 kubelet[2026]: E1002 19:28:19.679616 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:19.787802 kubelet[2026]: E1002 19:28:19.787752 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:20.085826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37-rootfs.mount: Deactivated successfully. 
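The "Error syncing pod" messages track the kubelet's restart back-off for the failing mount-cgroup container, doubling on each attempt: 10s, 20s, and 40s below. A small sketch of that progression, assuming the default 10-second initial delay and 300-second cap:

    # Doubling restart back-off as reflected in the CrashLoopBackOff messages.
    # Assumed defaults: 10s initial delay, 300s (5 min) maximum.
    def backoff_delays(initial=10, cap=300, attempts=6):
        delay, out = initial, []
        for _ in range(attempts):
            out.append(min(delay, cap))
            delay *= 2
        return out

    print(backoff_delays())  # [10, 20, 40, 80, 160, 300]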
Oct 2 19:28:20.250555 kubelet[2026]: I1002 19:28:20.250522 2026 scope.go:115] "RemoveContainer" containerID="e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd" Oct 2 19:28:20.251263 kubelet[2026]: I1002 19:28:20.251238 2026 scope.go:115] "RemoveContainer" containerID="e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd" Oct 2 19:28:20.253366 env[1559]: time="2023-10-02T19:28:20.253302697Z" level=info msg="RemoveContainer for \"e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd\"" Oct 2 19:28:20.256461 env[1559]: time="2023-10-02T19:28:20.256247198Z" level=info msg="RemoveContainer for \"e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd\"" Oct 2 19:28:20.256745 env[1559]: time="2023-10-02T19:28:20.256577882Z" level=error msg="RemoveContainer for \"e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd\" failed" error="rpc error: code = NotFound desc = get container info: container \"e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd\" in namespace \"k8s.io\": not found" Oct 2 19:28:20.257039 kubelet[2026]: E1002 19:28:20.257000 2026 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd\" in namespace \"k8s.io\": not found" containerID="e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd" Oct 2 19:28:20.257144 kubelet[2026]: E1002 19:28:20.257063 2026 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd" in namespace "k8s.io": not found; Skipping pod "cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)" Oct 2 19:28:20.257507 kubelet[2026]: E1002 19:28:20.257461 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:28:20.259529 env[1559]: time="2023-10-02T19:28:20.259453418Z" level=info msg="RemoveContainer for \"e5741d7e6e0a831cd69a6e3f573261009395b3fc76df2c2a1d0d2f47d9b361fd\" returns successfully" Oct 2 19:28:20.679848 kubelet[2026]: E1002 19:28:20.679781 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:21.680378 kubelet[2026]: E1002 19:28:21.680324 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:22.328957 kubelet[2026]: W1002 19:28:22.328888 2026 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eb36433_a779_48f6_9031_b14f9bb6926c.slice/cri-containerd-d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37.scope WatchSource:0}: task d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37 not found: not found Oct 2 19:28:22.681858 kubelet[2026]: E1002 19:28:22.681789 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:23.682537 kubelet[2026]: E1002 19:28:23.682465 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:28:24.621654 kubelet[2026]: E1002 19:28:24.621609 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:24.683255 kubelet[2026]: E1002 19:28:24.683186 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:24.789749 kubelet[2026]: E1002 19:28:24.789696 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:25.683565 kubelet[2026]: E1002 19:28:25.683523 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:26.685067 kubelet[2026]: E1002 19:28:26.685023 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:27.686765 kubelet[2026]: E1002 19:28:27.686665 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:28.687988 kubelet[2026]: E1002 19:28:28.687944 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:29.689467 kubelet[2026]: E1002 19:28:29.689421 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:29.791828 kubelet[2026]: E1002 19:28:29.791750 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:30.690536 kubelet[2026]: E1002 19:28:30.690463 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:31.691692 kubelet[2026]: E1002 19:28:31.691620 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:32.691977 kubelet[2026]: E1002 19:28:32.691932 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:33.693145 kubelet[2026]: E1002 19:28:33.693100 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:34.069739 kubelet[2026]: E1002 19:28:34.069445 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:28:34.694832 kubelet[2026]: E1002 19:28:34.694768 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:34.792641 kubelet[2026]: E1002 19:28:34.792560 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:35.695170 kubelet[2026]: E1002 19:28:35.695101 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:36.696084 kubelet[2026]: E1002 
19:28:36.696035 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:37.697748 kubelet[2026]: E1002 19:28:37.697639 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:38.698383 kubelet[2026]: E1002 19:28:38.698314 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:39.699234 kubelet[2026]: E1002 19:28:39.699158 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:39.793464 kubelet[2026]: E1002 19:28:39.793411 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:40.700176 kubelet[2026]: E1002 19:28:40.700102 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:41.700963 kubelet[2026]: E1002 19:28:41.700917 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:42.701998 kubelet[2026]: E1002 19:28:42.701927 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:43.703150 kubelet[2026]: E1002 19:28:43.703077 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:44.621766 kubelet[2026]: E1002 19:28:44.621691 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:44.703320 kubelet[2026]: E1002 19:28:44.703254 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:44.794397 kubelet[2026]: E1002 19:28:44.794346 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:45.703757 kubelet[2026]: E1002 19:28:45.703657 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:46.070053 kubelet[2026]: E1002 19:28:46.069786 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:28:46.704547 kubelet[2026]: E1002 19:28:46.704495 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:47.706315 kubelet[2026]: E1002 19:28:47.706266 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:48.707309 kubelet[2026]: E1002 19:28:48.707247 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:49.708413 kubelet[2026]: E1002 19:28:49.708342 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:28:49.795763 kubelet[2026]: E1002 19:28:49.795669 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:50.708689 kubelet[2026]: E1002 19:28:50.708643 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:51.709729 kubelet[2026]: E1002 19:28:51.709654 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:52.710832 kubelet[2026]: E1002 19:28:52.710786 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:53.712523 kubelet[2026]: E1002 19:28:53.712455 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:54.713010 kubelet[2026]: E1002 19:28:54.712934 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:54.796550 kubelet[2026]: E1002 19:28:54.796517 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:55.713441 kubelet[2026]: E1002 19:28:55.713376 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:56.714256 kubelet[2026]: E1002 19:28:56.714185 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:57.715195 kubelet[2026]: E1002 19:28:57.715126 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:58.716255 kubelet[2026]: E1002 19:28:58.716213 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:59.069650 kubelet[2026]: E1002 19:28:59.069383 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:28:59.717339 kubelet[2026]: E1002 19:28:59.717292 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:59.798175 kubelet[2026]: E1002 19:28:59.798135 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:00.718867 kubelet[2026]: E1002 19:29:00.718821 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:01.719845 kubelet[2026]: E1002 19:29:01.719777 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:02.720377 kubelet[2026]: E1002 19:29:02.720330 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:03.721498 kubelet[2026]: E1002 19:29:03.721427 2026 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:04.622162 kubelet[2026]: E1002 19:29:04.622119 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:04.722029 kubelet[2026]: E1002 19:29:04.721987 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:04.799821 kubelet[2026]: E1002 19:29:04.799788 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:05.723272 kubelet[2026]: E1002 19:29:05.723224 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:06.724668 kubelet[2026]: E1002 19:29:06.724602 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:07.725106 kubelet[2026]: E1002 19:29:07.725044 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:08.725923 kubelet[2026]: E1002 19:29:08.725853 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:09.726610 kubelet[2026]: E1002 19:29:09.726537 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:09.801872 kubelet[2026]: E1002 19:29:09.801838 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:10.727634 kubelet[2026]: E1002 19:29:10.727568 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:11.727831 kubelet[2026]: E1002 19:29:11.727782 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:12.729129 kubelet[2026]: E1002 19:29:12.729059 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:13.074938 env[1559]: time="2023-10-02T19:29:13.073778266Z" level=info msg="CreateContainer within sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:29:13.090202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2171136308.mount: Deactivated successfully. Oct 2 19:29:13.101410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2218987635.mount: Deactivated successfully. Oct 2 19:29:13.104561 env[1559]: time="2023-10-02T19:29:13.104477485Z" level=info msg="CreateContainer within sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d\"" Oct 2 19:29:13.109872 env[1559]: time="2023-10-02T19:29:13.109792707Z" level=info msg="StartContainer for \"5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d\"" Oct 2 19:29:13.161753 systemd[1]: Started cri-containerd-5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d.scope. 
Oct 2 19:29:13.198360 systemd[1]: cri-containerd-5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d.scope: Deactivated successfully. Oct 2 19:29:13.220029 env[1559]: time="2023-10-02T19:29:13.219962414Z" level=info msg="shim disconnected" id=5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d Oct 2 19:29:13.220373 env[1559]: time="2023-10-02T19:29:13.220339048Z" level=warning msg="cleaning up after shim disconnected" id=5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d namespace=k8s.io Oct 2 19:29:13.220518 env[1559]: time="2023-10-02T19:29:13.220489685Z" level=info msg="cleaning up dead shim" Oct 2 19:29:13.246285 env[1559]: time="2023-10-02T19:29:13.246219836Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:29:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2721 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:29:13Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:29:13.247008 env[1559]: time="2023-10-02T19:29:13.246927972Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:29:13.250110 env[1559]: time="2023-10-02T19:29:13.250030430Z" level=error msg="Failed to pipe stdout of container \"5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d\"" error="reading from a closed fifo" Oct 2 19:29:13.250333 env[1559]: time="2023-10-02T19:29:13.250030562Z" level=error msg="Failed to pipe stderr of container \"5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d\"" error="reading from a closed fifo" Oct 2 19:29:13.252259 env[1559]: time="2023-10-02T19:29:13.252183913Z" level=error msg="StartContainer for \"5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:29:13.252548 kubelet[2026]: E1002 19:29:13.252514 2026 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d" Oct 2 19:29:13.252795 kubelet[2026]: E1002 19:29:13.252650 2026 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:29:13.252795 kubelet[2026]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:29:13.252795 kubelet[2026]: rm /hostbin/cilium-mount Oct 2 19:29:13.252795 kubelet[2026]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jvrgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:29:13.253090 kubelet[2026]: E1002 19:29:13.252749 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:29:13.362304 kubelet[2026]: I1002 19:29:13.361119 2026 scope.go:115] "RemoveContainer" containerID="d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37" Oct 2 19:29:13.362304 kubelet[2026]: I1002 19:29:13.361847 2026 scope.go:115] "RemoveContainer" containerID="d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37" Oct 2 19:29:13.364920 env[1559]: time="2023-10-02T19:29:13.364861452Z" level=info msg="RemoveContainer for \"d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37\"" Oct 2 19:29:13.367722 env[1559]: time="2023-10-02T19:29:13.367647206Z" level=info msg="RemoveContainer for \"d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37\"" Oct 2 19:29:13.367953 env[1559]: time="2023-10-02T19:29:13.367824722Z" level=error msg="RemoveContainer for \"d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37\" failed" error="failed to set removing state for container \"d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37\": container is already in removing state" Oct 2 19:29:13.368231 kubelet[2026]: E1002 19:29:13.368195 2026 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37\": container is already in removing state" 
containerID="d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37" Oct 2 19:29:13.368352 kubelet[2026]: E1002 19:29:13.368279 2026 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37": container is already in removing state; Skipping pod "cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)" Oct 2 19:29:13.368886 kubelet[2026]: E1002 19:29:13.368849 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:29:13.370385 env[1559]: time="2023-10-02T19:29:13.370310954Z" level=info msg="RemoveContainer for \"d2e6b64020cfaf856d75f838b34eb8c071a8af9065fc99943fa3a93b7c7e2d37\" returns successfully" Oct 2 19:29:13.729302 kubelet[2026]: E1002 19:29:13.729216 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:14.085853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d-rootfs.mount: Deactivated successfully. Oct 2 19:29:14.729791 kubelet[2026]: E1002 19:29:14.729754 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:14.803833 kubelet[2026]: E1002 19:29:14.803780 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:15.731407 kubelet[2026]: E1002 19:29:15.731341 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:16.326873 kubelet[2026]: W1002 19:29:16.326798 2026 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4eb36433_a779_48f6_9031_b14f9bb6926c.slice/cri-containerd-5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d.scope WatchSource:0}: task 5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d not found: not found Oct 2 19:29:16.732100 kubelet[2026]: E1002 19:29:16.732061 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:17.733874 kubelet[2026]: E1002 19:29:17.733810 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:18.734179 kubelet[2026]: E1002 19:29:18.734131 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:19.735117 kubelet[2026]: E1002 19:29:19.735069 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:19.805156 kubelet[2026]: E1002 19:29:19.805090 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:20.736730 kubelet[2026]: E1002 19:29:20.736624 2026 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:21.737144 kubelet[2026]: E1002 19:29:21.737073 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:22.737976 kubelet[2026]: E1002 19:29:22.737929 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:23.739215 kubelet[2026]: E1002 19:29:23.739160 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:24.622034 kubelet[2026]: E1002 19:29:24.621991 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:24.740804 kubelet[2026]: E1002 19:29:24.740739 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:24.806554 kubelet[2026]: E1002 19:29:24.806521 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:25.741288 kubelet[2026]: E1002 19:29:25.741219 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:26.741951 kubelet[2026]: E1002 19:29:26.741878 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:27.069819 kubelet[2026]: E1002 19:29:27.069640 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:29:27.742443 kubelet[2026]: E1002 19:29:27.742396 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:28.743264 kubelet[2026]: E1002 19:29:28.743220 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:29.743995 kubelet[2026]: E1002 19:29:29.743948 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:29.807871 kubelet[2026]: E1002 19:29:29.807840 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:30.744803 kubelet[2026]: E1002 19:29:30.744755 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:31.745543 kubelet[2026]: E1002 19:29:31.745499 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:32.746617 kubelet[2026]: E1002 19:29:32.746547 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:33.747325 kubelet[2026]: E1002 19:29:33.747257 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:34.748461 kubelet[2026]: E1002 19:29:34.748389 2026 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:34.809241 kubelet[2026]: E1002 19:29:34.809141 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:35.749377 kubelet[2026]: E1002 19:29:35.749333 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:36.750912 kubelet[2026]: E1002 19:29:36.750846 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:37.751294 kubelet[2026]: E1002 19:29:37.751218 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:38.070380 kubelet[2026]: E1002 19:29:38.069977 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:29:38.751731 kubelet[2026]: E1002 19:29:38.751667 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:39.752306 kubelet[2026]: E1002 19:29:39.752261 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:39.810450 kubelet[2026]: E1002 19:29:39.810399 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:40.753363 kubelet[2026]: E1002 19:29:40.753297 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:41.753911 kubelet[2026]: E1002 19:29:41.753790 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:42.754816 kubelet[2026]: E1002 19:29:42.754745 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:43.755875 kubelet[2026]: E1002 19:29:43.755606 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:44.621923 kubelet[2026]: E1002 19:29:44.621881 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:44.757059 kubelet[2026]: E1002 19:29:44.757017 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:44.811959 kubelet[2026]: E1002 19:29:44.811909 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:45.758076 kubelet[2026]: E1002 19:29:45.758005 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:46.759013 kubelet[2026]: E1002 19:29:46.758948 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:29:47.759180 kubelet[2026]: E1002 19:29:47.759108 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:48.760306 kubelet[2026]: E1002 19:29:48.760233 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:49.761041 kubelet[2026]: E1002 19:29:49.760977 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:49.813114 kubelet[2026]: E1002 19:29:49.813077 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:50.070391 kubelet[2026]: E1002 19:29:50.069941 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:29:50.761471 kubelet[2026]: E1002 19:29:50.761405 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:51.762452 kubelet[2026]: E1002 19:29:51.762385 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:52.762806 kubelet[2026]: E1002 19:29:52.762744 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:53.763351 kubelet[2026]: E1002 19:29:53.763279 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:54.764539 kubelet[2026]: E1002 19:29:54.764466 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:54.814586 kubelet[2026]: E1002 19:29:54.814533 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:55.764858 kubelet[2026]: E1002 19:29:55.764790 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:56.765657 kubelet[2026]: E1002 19:29:56.765607 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:57.766683 kubelet[2026]: E1002 19:29:57.766637 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:58.767790 kubelet[2026]: E1002 19:29:58.767685 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:59.768912 kubelet[2026]: E1002 19:29:59.768841 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:59.816392 kubelet[2026]: E1002 19:29:59.816346 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:00.769929 kubelet[2026]: E1002 
19:30:00.769829 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:01.770969 kubelet[2026]: E1002 19:30:01.770899 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:02.771814 kubelet[2026]: E1002 19:30:02.771768 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:03.070570 kubelet[2026]: E1002 19:30:03.070166 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:30:03.773044 kubelet[2026]: E1002 19:30:03.772998 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:04.621789 kubelet[2026]: E1002 19:30:04.621747 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:04.774443 kubelet[2026]: E1002 19:30:04.774373 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:04.817480 kubelet[2026]: E1002 19:30:04.817410 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:05.774882 kubelet[2026]: E1002 19:30:05.774818 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:06.775594 kubelet[2026]: E1002 19:30:06.775534 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:07.777214 kubelet[2026]: E1002 19:30:07.777171 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:08.778252 kubelet[2026]: E1002 19:30:08.778187 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:09.779211 kubelet[2026]: E1002 19:30:09.779166 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:09.819297 kubelet[2026]: E1002 19:30:09.819247 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:10.780340 kubelet[2026]: E1002 19:30:10.780276 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:11.781510 kubelet[2026]: E1002 19:30:11.781442 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:12.782556 kubelet[2026]: E1002 19:30:12.782480 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:13.783633 kubelet[2026]: E1002 19:30:13.783588 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:30:14.070489 kubelet[2026]: E1002 19:30:14.070079 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:30:14.784817 kubelet[2026]: E1002 19:30:14.784772 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:14.820770 kubelet[2026]: E1002 19:30:14.820733 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:15.786570 kubelet[2026]: E1002 19:30:15.786502 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:16.787545 kubelet[2026]: E1002 19:30:16.787479 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:17.787751 kubelet[2026]: E1002 19:30:17.787673 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:18.788671 kubelet[2026]: E1002 19:30:18.788624 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:19.790209 kubelet[2026]: E1002 19:30:19.790137 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:19.822193 kubelet[2026]: E1002 19:30:19.822148 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:20.790387 kubelet[2026]: E1002 19:30:20.790325 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:21.790946 kubelet[2026]: E1002 19:30:21.790901 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:22.792323 kubelet[2026]: E1002 19:30:22.792269 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:23.793234 kubelet[2026]: E1002 19:30:23.793168 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:24.621953 kubelet[2026]: E1002 19:30:24.621907 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:24.793938 kubelet[2026]: E1002 19:30:24.793875 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:24.823386 kubelet[2026]: E1002 19:30:24.823356 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:25.794640 kubelet[2026]: E1002 19:30:25.794575 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:26.070032 kubelet[2026]: E1002 19:30:26.069366 2026 pod_workers.go:965] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-w4m2l_kube-system(4eb36433-a779-48f6-9031-b14f9bb6926c)\"" pod="kube-system/cilium-w4m2l" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c Oct 2 19:30:26.795197 kubelet[2026]: E1002 19:30:26.795151 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:27.796589 kubelet[2026]: E1002 19:30:27.796548 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:28.797523 kubelet[2026]: E1002 19:30:28.797475 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:29.799198 kubelet[2026]: E1002 19:30:29.799126 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:29.825103 kubelet[2026]: E1002 19:30:29.825069 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:30.800117 kubelet[2026]: E1002 19:30:30.800057 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:31.800601 kubelet[2026]: E1002 19:30:31.800536 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:32.801628 kubelet[2026]: E1002 19:30:32.801565 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:33.167273 env[1559]: time="2023-10-02T19:30:33.167208302Z" level=info msg="StopPodSandbox for \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\"" Oct 2 19:30:33.171371 env[1559]: time="2023-10-02T19:30:33.167302046Z" level=info msg="Container to stop \"5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:30:33.169645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b-shm.mount: Deactivated successfully. Oct 2 19:30:33.191000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:30:33.192663 systemd[1]: cri-containerd-fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b.scope: Deactivated successfully. Oct 2 19:30:33.196383 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:30:33.196546 kernel: audit: type=1334 audit(1696275033.191:732): prog-id=80 op=UNLOAD Oct 2 19:30:33.202504 kernel: audit: type=1334 audit(1696275033.197:733): prog-id=83 op=UNLOAD Oct 2 19:30:33.197000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:30:33.245289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b-rootfs.mount: Deactivated successfully. 
Oct 2 19:30:33.262018 env[1559]: time="2023-10-02T19:30:33.261938824Z" level=info msg="shim disconnected" id=fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b Oct 2 19:30:33.262305 env[1559]: time="2023-10-02T19:30:33.262017617Z" level=warning msg="cleaning up after shim disconnected" id=fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b namespace=k8s.io Oct 2 19:30:33.262305 env[1559]: time="2023-10-02T19:30:33.262041005Z" level=info msg="cleaning up dead shim" Oct 2 19:30:33.289037 env[1559]: time="2023-10-02T19:30:33.288961116Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2758 runtime=io.containerd.runc.v2\n" Oct 2 19:30:33.289579 env[1559]: time="2023-10-02T19:30:33.289531549Z" level=info msg="TearDown network for sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" successfully" Oct 2 19:30:33.289689 env[1559]: time="2023-10-02T19:30:33.289579369Z" level=info msg="StopPodSandbox for \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" returns successfully" Oct 2 19:30:33.409671 kubelet[2026]: I1002 19:30:33.409610 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-bpf-maps\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.409943 kubelet[2026]: I1002 19:30:33.409698 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cni-path\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.409943 kubelet[2026]: I1002 19:30:33.409780 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-xtables-lock\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.409943 kubelet[2026]: I1002 19:30:33.409854 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4eb36433-a779-48f6-9031-b14f9bb6926c-hubble-tls\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.409943 kubelet[2026]: I1002 19:30:33.409918 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-lib-modules\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.410195 kubelet[2026]: I1002 19:30:33.409962 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-etc-cni-netd\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.410195 kubelet[2026]: I1002 19:30:33.410028 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-host-proc-sys-net\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.410195 kubelet[2026]: I1002 19:30:33.410100 2026 
reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvrgm\" (UniqueName: \"kubernetes.io/projected/4eb36433-a779-48f6-9031-b14f9bb6926c-kube-api-access-jvrgm\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.410195 kubelet[2026]: I1002 19:30:33.410144 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-run\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.410438 kubelet[2026]: I1002 19:30:33.410218 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-config-path\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.410438 kubelet[2026]: I1002 19:30:33.410285 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-host-proc-sys-kernel\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.410438 kubelet[2026]: I1002 19:30:33.410331 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-hostproc\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.410438 kubelet[2026]: I1002 19:30:33.410400 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4eb36433-a779-48f6-9031-b14f9bb6926c-clustermesh-secrets\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.410684 kubelet[2026]: I1002 19:30:33.410465 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-cgroup\") pod \"4eb36433-a779-48f6-9031-b14f9bb6926c\" (UID: \"4eb36433-a779-48f6-9031-b14f9bb6926c\") " Oct 2 19:30:33.410684 kubelet[2026]: I1002 19:30:33.410537 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:33.410684 kubelet[2026]: I1002 19:30:33.410584 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:33.410684 kubelet[2026]: I1002 19:30:33.410650 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cni-path" (OuterVolumeSpecName: "cni-path") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:33.410964 kubelet[2026]: I1002 19:30:33.410727 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:33.411751 kubelet[2026]: I1002 19:30:33.411094 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:33.411751 kubelet[2026]: I1002 19:30:33.411149 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:33.411751 kubelet[2026]: I1002 19:30:33.411190 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:33.411751 kubelet[2026]: I1002 19:30:33.411260 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:33.412100 kubelet[2026]: I1002 19:30:33.411932 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:33.412100 kubelet[2026]: I1002 19:30:33.412009 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-hostproc" (OuterVolumeSpecName: "hostproc") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:33.412684 kubelet[2026]: W1002 19:30:33.412635 2026 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4eb36433-a779-48f6-9031-b14f9bb6926c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:30:33.418048 kubelet[2026]: I1002 19:30:33.417905 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:30:33.427353 systemd[1]: var-lib-kubelet-pods-4eb36433\x2da779\x2d48f6\x2d9031\x2db14f9bb6926c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djvrgm.mount: Deactivated successfully. Oct 2 19:30:33.430949 systemd[1]: var-lib-kubelet-pods-4eb36433\x2da779\x2d48f6\x2d9031\x2db14f9bb6926c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:30:33.434615 kubelet[2026]: I1002 19:30:33.434546 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb36433-a779-48f6-9031-b14f9bb6926c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:30:33.435034 kubelet[2026]: I1002 19:30:33.434993 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb36433-a779-48f6-9031-b14f9bb6926c-kube-api-access-jvrgm" (OuterVolumeSpecName: "kube-api-access-jvrgm") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "kube-api-access-jvrgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:30:33.439063 systemd[1]: var-lib-kubelet-pods-4eb36433\x2da779\x2d48f6\x2d9031\x2db14f9bb6926c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:30:33.441677 kubelet[2026]: I1002 19:30:33.441606 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb36433-a779-48f6-9031-b14f9bb6926c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4eb36433-a779-48f6-9031-b14f9bb6926c" (UID: "4eb36433-a779-48f6-9031-b14f9bb6926c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:30:33.510989 kubelet[2026]: I1002 19:30:33.510939 2026 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4eb36433-a779-48f6-9031-b14f9bb6926c-clustermesh-secrets\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.510989 kubelet[2026]: I1002 19:30:33.510989 2026 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-cgroup\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511222 kubelet[2026]: I1002 19:30:33.511018 2026 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-host-proc-sys-kernel\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511222 kubelet[2026]: I1002 19:30:33.511047 2026 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-hostproc\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511222 kubelet[2026]: I1002 19:30:33.511070 2026 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-etc-cni-netd\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511222 kubelet[2026]: I1002 19:30:33.511093 2026 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-bpf-maps\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511222 kubelet[2026]: I1002 19:30:33.511114 2026 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cni-path\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511222 kubelet[2026]: I1002 19:30:33.511137 2026 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-xtables-lock\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511222 kubelet[2026]: I1002 19:30:33.511160 2026 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4eb36433-a779-48f6-9031-b14f9bb6926c-hubble-tls\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511222 kubelet[2026]: I1002 19:30:33.511182 2026 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-lib-modules\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511222 kubelet[2026]: I1002 19:30:33.511206 2026 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-host-proc-sys-net\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511792 kubelet[2026]: I1002 19:30:33.511229 2026 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-config-path\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511792 kubelet[2026]: I1002 19:30:33.511254 2026 reconciler.go:399] "Volume detached for volume \"kube-api-access-jvrgm\" (UniqueName: \"kubernetes.io/projected/4eb36433-a779-48f6-9031-b14f9bb6926c-kube-api-access-jvrgm\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.511792 kubelet[2026]: I1002 
19:30:33.511276 2026 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4eb36433-a779-48f6-9031-b14f9bb6926c-cilium-run\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:33.521753 kubelet[2026]: I1002 19:30:33.521667 2026 scope.go:115] "RemoveContainer" containerID="5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d" Oct 2 19:30:33.523764 env[1559]: time="2023-10-02T19:30:33.523676450Z" level=info msg="RemoveContainer for \"5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d\"" Oct 2 19:30:33.528911 env[1559]: time="2023-10-02T19:30:33.528838525Z" level=info msg="RemoveContainer for \"5a09763be667e006b943ba4a13696d82dd28640892253060bee0af63414ffc8d\" returns successfully" Oct 2 19:30:33.531073 systemd[1]: Removed slice kubepods-burstable-pod4eb36433_a779_48f6_9031_b14f9bb6926c.slice. Oct 2 19:30:33.572328 kubelet[2026]: I1002 19:30:33.572285 2026 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:30:33.572599 kubelet[2026]: E1002 19:30:33.572575 2026 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4eb36433-a779-48f6-9031-b14f9bb6926c" containerName="mount-cgroup" Oct 2 19:30:33.572762 kubelet[2026]: E1002 19:30:33.572741 2026 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4eb36433-a779-48f6-9031-b14f9bb6926c" containerName="mount-cgroup" Oct 2 19:30:33.572876 kubelet[2026]: E1002 19:30:33.572855 2026 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4eb36433-a779-48f6-9031-b14f9bb6926c" containerName="mount-cgroup" Oct 2 19:30:33.573007 kubelet[2026]: E1002 19:30:33.572986 2026 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4eb36433-a779-48f6-9031-b14f9bb6926c" containerName="mount-cgroup" Oct 2 19:30:33.573136 kubelet[2026]: I1002 19:30:33.573115 2026 memory_manager.go:345] "RemoveStaleState removing state" podUID="4eb36433-a779-48f6-9031-b14f9bb6926c" containerName="mount-cgroup" Oct 2 19:30:33.573251 kubelet[2026]: I1002 19:30:33.573231 2026 memory_manager.go:345] "RemoveStaleState removing state" podUID="4eb36433-a779-48f6-9031-b14f9bb6926c" containerName="mount-cgroup" Oct 2 19:30:33.573359 kubelet[2026]: I1002 19:30:33.573340 2026 memory_manager.go:345] "RemoveStaleState removing state" podUID="4eb36433-a779-48f6-9031-b14f9bb6926c" containerName="mount-cgroup" Oct 2 19:30:33.573500 kubelet[2026]: E1002 19:30:33.573479 2026 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4eb36433-a779-48f6-9031-b14f9bb6926c" containerName="mount-cgroup" Oct 2 19:30:33.573654 kubelet[2026]: I1002 19:30:33.573634 2026 memory_manager.go:345] "RemoveStaleState removing state" podUID="4eb36433-a779-48f6-9031-b14f9bb6926c" containerName="mount-cgroup" Oct 2 19:30:33.573811 kubelet[2026]: I1002 19:30:33.573790 2026 memory_manager.go:345] "RemoveStaleState removing state" podUID="4eb36433-a779-48f6-9031-b14f9bb6926c" containerName="mount-cgroup" Oct 2 19:30:33.584285 systemd[1]: Created slice kubepods-burstable-pod368583b9_3f06_40a5_9b5f_b7cc14d4706e.slice. 
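[Editor's note] The sequence above is consistent with the crash-looping cilium-w4m2l pod being deleted and its DaemonSet scheduling a replacement, which the following entries admit as cilium-fnv62 (pod UID 368583b9-3f06-40a5-9b5f-b7cc14d4706e). The log does not record how the deletion was triggered, so the client-go sketch below is purely an illustration of such a deletion; the kubeconfig path is a hypothetical placeholder.

// Illustrative sketch (assumption: the pod was deleted via the Kubernetes API).
// Deleting a DaemonSet-managed pod such as kube-system/cilium-w4m2l causes the
// controller to create a replacement, matching the cilium-fnv62 pod admitted below.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // hypothetical kubeconfig path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := client.CoreV1().Pods("kube-system").Delete(context.Background(), "cilium-w4m2l", metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
}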
Oct 2 19:30:33.612184 kubelet[2026]: I1002 19:30:33.612070 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-cgroup\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.612458 kubelet[2026]: I1002 19:30:33.612222 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cni-path\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.612458 kubelet[2026]: I1002 19:30:33.612336 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-host-proc-sys-net\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.612458 kubelet[2026]: I1002 19:30:33.612449 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-run\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.612643 kubelet[2026]: I1002 19:30:33.612558 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-etc-cni-netd\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.612740 kubelet[2026]: I1002 19:30:33.612660 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-lib-modules\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.612831 kubelet[2026]: I1002 19:30:33.612802 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/368583b9-3f06-40a5-9b5f-b7cc14d4706e-clustermesh-secrets\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.612900 kubelet[2026]: I1002 19:30:33.612881 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-host-proc-sys-kernel\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.613003 kubelet[2026]: I1002 19:30:33.612976 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/368583b9-3f06-40a5-9b5f-b7cc14d4706e-hubble-tls\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.613136 kubelet[2026]: I1002 19:30:33.613110 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-hostproc\") pod 
\"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.613258 kubelet[2026]: I1002 19:30:33.613227 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-config-path\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.613358 kubelet[2026]: I1002 19:30:33.613344 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-bpf-maps\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.613481 kubelet[2026]: I1002 19:30:33.613438 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-xtables-lock\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.613597 kubelet[2026]: I1002 19:30:33.613566 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjz8d\" (UniqueName: \"kubernetes.io/projected/368583b9-3f06-40a5-9b5f-b7cc14d4706e-kube-api-access-mjz8d\") pod \"cilium-fnv62\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " pod="kube-system/cilium-fnv62" Oct 2 19:30:33.803776 kubelet[2026]: E1002 19:30:33.802755 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:33.897027 env[1559]: time="2023-10-02T19:30:33.896948869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fnv62,Uid:368583b9-3f06-40a5-9b5f-b7cc14d4706e,Namespace:kube-system,Attempt:0,}" Oct 2 19:30:33.930985 env[1559]: time="2023-10-02T19:30:33.930864683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:30:33.931219 env[1559]: time="2023-10-02T19:30:33.930941135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:30:33.931219 env[1559]: time="2023-10-02T19:30:33.931004747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:30:33.931614 env[1559]: time="2023-10-02T19:30:33.931468920Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e pid=2785 runtime=io.containerd.runc.v2 Oct 2 19:30:33.959940 systemd[1]: Started cri-containerd-59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e.scope. 
Oct 2 19:30:33.997000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.997000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.014786 kernel: audit: type=1400 audit(1696275033.997:734): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.014917 kernel: audit: type=1400 audit(1696275033.997:735): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.014970 kernel: audit: type=1400 audit(1696275033.997:736): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.997000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.022382 kernel: audit: type=1400 audit(1696275033.997:737): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.997000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.997000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.037609 kernel: audit: type=1400 audit(1696275033.997:738): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.997000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.045469 kernel: audit: type=1400 audit(1696275033.997:739): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.997000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.053524 kernel: audit: type=1400 audit(1696275033.997:740): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.997000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.061460 kernel: audit: type=1400 audit(1696275033.997:741): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.997000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.997000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.997000 audit: BPF prog-id=87 op=LOAD Oct 2 19:30:33.998000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit[2794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2785 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:33.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666164666238653566613836636265636539313636336163666538 Oct 2 19:30:33.998000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit[2794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2785 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:33.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666164666238653566613836636265636539313636336163666538 Oct 2 19:30:33.998000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit[2794]: AVC avc: 
denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:33.998000 audit: BPF prog-id=88 op=LOAD Oct 2 19:30:33.998000 audit[2794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2785 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:33.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666164666238653566613836636265636539313636336163666538 Oct 2 19:30:34.005000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.005000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.005000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.005000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.005000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.005000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.005000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.005000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.005000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.005000 audit: BPF prog-id=89 op=LOAD Oct 2 19:30:34.005000 audit[2794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 
a3=0 items=0 ppid=2785 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:34.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666164666238653566613836636265636539313636336163666538 Oct 2 19:30:34.013000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:30:34.013000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:30:34.013000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.013000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.013000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.013000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.013000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.013000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.013000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.013000 audit[2794]: AVC avc: denied { perfmon } for pid=2794 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.013000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.013000 audit[2794]: AVC avc: denied { bpf } for pid=2794 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:34.013000 audit: BPF prog-id=90 op=LOAD Oct 2 19:30:34.013000 audit[2794]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2785 pid=2794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:34.013000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539666164666238653566613836636265636539313636336163666538 Oct 2 19:30:34.088045 env[1559]: time="2023-10-02T19:30:34.087965537Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-fnv62,Uid:368583b9-3f06-40a5-9b5f-b7cc14d4706e,Namespace:kube-system,Attempt:0,} returns sandbox id \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\"" Oct 2 19:30:34.093195 env[1559]: time="2023-10-02T19:30:34.093118935Z" level=info msg="CreateContainer within sandbox \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:30:34.112683 env[1559]: time="2023-10-02T19:30:34.112596715Z" level=info msg="CreateContainer within sandbox \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f\"" Oct 2 19:30:34.113645 env[1559]: time="2023-10-02T19:30:34.113599101Z" level=info msg="StartContainer for \"5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f\"" Oct 2 19:30:34.156479 systemd[1]: Started cri-containerd-5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f.scope. Oct 2 19:30:34.204805 systemd[1]: cri-containerd-5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f.scope: Deactivated successfully. Oct 2 19:30:34.217143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f-rootfs.mount: Deactivated successfully. Oct 2 19:30:34.240164 env[1559]: time="2023-10-02T19:30:34.240094444Z" level=info msg="shim disconnected" id=5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f Oct 2 19:30:34.240902 env[1559]: time="2023-10-02T19:30:34.240865410Z" level=warning msg="cleaning up after shim disconnected" id=5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f namespace=k8s.io Oct 2 19:30:34.241060 env[1559]: time="2023-10-02T19:30:34.241029438Z" level=info msg="cleaning up dead shim" Oct 2 19:30:34.266858 env[1559]: time="2023-10-02T19:30:34.266793311Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2844 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:34Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:34.267531 env[1559]: time="2023-10-02T19:30:34.267452928Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Oct 2 19:30:34.267958 env[1559]: time="2023-10-02T19:30:34.267870181Z" level=error msg="Failed to pipe stdout of container \"5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f\"" error="reading from a closed fifo" Oct 2 19:30:34.269896 env[1559]: time="2023-10-02T19:30:34.269823029Z" level=error msg="Failed to pipe stderr of container \"5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f\"" error="reading from a closed fifo" Oct 2 19:30:34.272382 env[1559]: time="2023-10-02T19:30:34.272275234Z" level=error msg="StartContainer for \"5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:34.273434 kubelet[2026]: E1002 19:30:34.272753 2026 
remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f" Oct 2 19:30:34.273434 kubelet[2026]: E1002 19:30:34.272934 2026 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:34.273434 kubelet[2026]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:34.273434 kubelet[2026]: rm /hostbin/cilium-mount Oct 2 19:30:34.273868 kubelet[2026]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mjz8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-fnv62_kube-system(368583b9-3f06-40a5-9b5f-b7cc14d4706e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:34.274030 kubelet[2026]: E1002 19:30:34.273022 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-fnv62" podUID=368583b9-3f06-40a5-9b5f-b7cc14d4706e Oct 2 19:30:34.529001 env[1559]: time="2023-10-02T19:30:34.528937699Z" level=info msg="StopPodSandbox for \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\"" Oct 2 19:30:34.529280 env[1559]: time="2023-10-02T19:30:34.529023199Z" level=info msg="Container to stop \"5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:30:34.531334 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e-shm.mount: Deactivated successfully. Oct 2 19:30:34.553469 systemd[1]: cri-containerd-59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e.scope: Deactivated successfully. Oct 2 19:30:34.552000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:30:34.556000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:30:34.600220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e-rootfs.mount: Deactivated successfully. Oct 2 19:30:34.616449 env[1559]: time="2023-10-02T19:30:34.616371186Z" level=info msg="shim disconnected" id=59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e Oct 2 19:30:34.616736 env[1559]: time="2023-10-02T19:30:34.616450686Z" level=warning msg="cleaning up after shim disconnected" id=59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e namespace=k8s.io Oct 2 19:30:34.616736 env[1559]: time="2023-10-02T19:30:34.616475515Z" level=info msg="cleaning up dead shim" Oct 2 19:30:34.642676 env[1559]: time="2023-10-02T19:30:34.642582060Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2876 runtime=io.containerd.runc.v2\n" Oct 2 19:30:34.643261 env[1559]: time="2023-10-02T19:30:34.643202869Z" level=info msg="TearDown network for sandbox \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\" successfully" Oct 2 19:30:34.643408 env[1559]: time="2023-10-02T19:30:34.643256125Z" level=info msg="StopPodSandbox for \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\" returns successfully" Oct 2 19:30:34.726561 kubelet[2026]: I1002 19:30:34.726481 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-config-path\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.726828 kubelet[2026]: I1002 19:30:34.726576 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-etc-cni-netd\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.726828 kubelet[2026]: I1002 19:30:34.726650 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/368583b9-3f06-40a5-9b5f-b7cc14d4706e-clustermesh-secrets\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.726828 kubelet[2026]: I1002 19:30:34.726757 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-hostproc\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.726828 kubelet[2026]: I1002 19:30:34.726825 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-lib-modules\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.727094 kubelet[2026]: I1002 19:30:34.726890 2026 reconciler.go:211] "operationExecutor.UnmountVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-host-proc-sys-net\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.727094 kubelet[2026]: I1002 19:30:34.726957 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-run\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.727094 kubelet[2026]: I1002 19:30:34.727005 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/368583b9-3f06-40a5-9b5f-b7cc14d4706e-hubble-tls\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.727094 kubelet[2026]: I1002 19:30:34.727069 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-bpf-maps\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.727337 kubelet[2026]: I1002 19:30:34.727109 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cni-path\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.727337 kubelet[2026]: I1002 19:30:34.727176 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-cgroup\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.727337 kubelet[2026]: I1002 19:30:34.727246 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-host-proc-sys-kernel\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.727337 kubelet[2026]: I1002 19:30:34.727311 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-xtables-lock\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.727577 kubelet[2026]: I1002 19:30:34.727359 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjz8d\" (UniqueName: \"kubernetes.io/projected/368583b9-3f06-40a5-9b5f-b7cc14d4706e-kube-api-access-mjz8d\") pod \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\" (UID: \"368583b9-3f06-40a5-9b5f-b7cc14d4706e\") " Oct 2 19:30:34.729729 kubelet[2026]: I1002 19:30:34.727689 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.729729 kubelet[2026]: W1002 19:30:34.727975 2026 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/368583b9-3f06-40a5-9b5f-b7cc14d4706e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:30:34.729729 kubelet[2026]: I1002 19:30:34.728565 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.729729 kubelet[2026]: I1002 19:30:34.728635 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cni-path" (OuterVolumeSpecName: "cni-path") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.729729 kubelet[2026]: I1002 19:30:34.728677 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.730132 kubelet[2026]: I1002 19:30:34.728764 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.730132 kubelet[2026]: I1002 19:30:34.728807 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.730132 kubelet[2026]: I1002 19:30:34.729866 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.730314 kubelet[2026]: I1002 19:30:34.730256 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-hostproc" (OuterVolumeSpecName: "hostproc") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.730314 kubelet[2026]: I1002 19:30:34.730301 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.730440 kubelet[2026]: I1002 19:30:34.730339 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:30:34.733406 kubelet[2026]: I1002 19:30:34.733350 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:30:34.741142 systemd[1]: var-lib-kubelet-pods-368583b9\x2d3f06\x2d40a5\x2d9b5f\x2db7cc14d4706e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:30:34.746063 kubelet[2026]: I1002 19:30:34.745993 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368583b9-3f06-40a5-9b5f-b7cc14d4706e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:30:34.747405 kubelet[2026]: I1002 19:30:34.747338 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/368583b9-3f06-40a5-9b5f-b7cc14d4706e-kube-api-access-mjz8d" (OuterVolumeSpecName: "kube-api-access-mjz8d") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "kube-api-access-mjz8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:30:34.752506 kubelet[2026]: I1002 19:30:34.752453 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/368583b9-3f06-40a5-9b5f-b7cc14d4706e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "368583b9-3f06-40a5-9b5f-b7cc14d4706e" (UID: "368583b9-3f06-40a5-9b5f-b7cc14d4706e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:30:34.803646 kubelet[2026]: E1002 19:30:34.803505 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:34.827021 kubelet[2026]: E1002 19:30:34.826946 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:34.828115 kubelet[2026]: I1002 19:30:34.828089 2026 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/368583b9-3f06-40a5-9b5f-b7cc14d4706e-hubble-tls\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.828256 kubelet[2026]: I1002 19:30:34.828233 2026 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-bpf-maps\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.828394 kubelet[2026]: I1002 19:30:34.828374 2026 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cni-path\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.828544 kubelet[2026]: I1002 19:30:34.828524 2026 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-host-proc-sys-net\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.828686 kubelet[2026]: I1002 19:30:34.828667 2026 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-run\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.828924 kubelet[2026]: I1002 19:30:34.828903 2026 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-host-proc-sys-kernel\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.829073 kubelet[2026]: I1002 19:30:34.829054 2026 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-xtables-lock\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.829210 kubelet[2026]: I1002 19:30:34.829191 2026 reconciler.go:399] "Volume detached for volume \"kube-api-access-mjz8d\" (UniqueName: \"kubernetes.io/projected/368583b9-3f06-40a5-9b5f-b7cc14d4706e-kube-api-access-mjz8d\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.829351 kubelet[2026]: I1002 19:30:34.829332 2026 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-cgroup\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.829496 kubelet[2026]: I1002 19:30:34.829477 2026 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-etc-cni-netd\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.829643 kubelet[2026]: I1002 19:30:34.829624 2026 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/368583b9-3f06-40a5-9b5f-b7cc14d4706e-cilium-config-path\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.829802 kubelet[2026]: I1002 19:30:34.829783 2026 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-hostproc\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.829948 kubelet[2026]: I1002 19:30:34.829929 2026 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368583b9-3f06-40a5-9b5f-b7cc14d4706e-lib-modules\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:34.830086 kubelet[2026]: I1002 19:30:34.830067 2026 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/368583b9-3f06-40a5-9b5f-b7cc14d4706e-clustermesh-secrets\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:30:35.074413 kubelet[2026]: I1002 19:30:35.074296 2026 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4eb36433-a779-48f6-9031-b14f9bb6926c path="/var/lib/kubelet/pods/4eb36433-a779-48f6-9031-b14f9bb6926c/volumes" Oct 2 19:30:35.083377 systemd[1]: Removed slice kubepods-burstable-pod368583b9_3f06_40a5_9b5f_b7cc14d4706e.slice. Oct 2 19:30:35.169736 systemd[1]: var-lib-kubelet-pods-368583b9\x2d3f06\x2d40a5\x2d9b5f\x2db7cc14d4706e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmjz8d.mount: Deactivated successfully. Oct 2 19:30:35.169917 systemd[1]: var-lib-kubelet-pods-368583b9\x2d3f06\x2d40a5\x2d9b5f\x2db7cc14d4706e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:30:35.532057 kubelet[2026]: I1002 19:30:35.532022 2026 scope.go:115] "RemoveContainer" containerID="5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f" Oct 2 19:30:35.537211 env[1559]: time="2023-10-02T19:30:35.536797859Z" level=info msg="RemoveContainer for \"5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f\"" Oct 2 19:30:35.541033 env[1559]: time="2023-10-02T19:30:35.540881011Z" level=info msg="RemoveContainer for \"5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f\" returns successfully" Oct 2 19:30:35.804741 kubelet[2026]: E1002 19:30:35.804583 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:36.805285 kubelet[2026]: E1002 19:30:36.805238 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:37.072699 kubelet[2026]: I1002 19:30:37.072593 2026 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=368583b9-3f06-40a5-9b5f-b7cc14d4706e path="/var/lib/kubelet/pods/368583b9-3f06-40a5-9b5f-b7cc14d4706e/volumes" Oct 2 19:30:37.347694 kubelet[2026]: W1002 19:30:37.347557 2026 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod368583b9_3f06_40a5_9b5f_b7cc14d4706e.slice/cri-containerd-5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f.scope WatchSource:0}: container "5bbd68a8655e69e4c530303836f8739f87e67895ef2fdaaed509a0486f2fb15f" in namespace "k8s.io": not found Oct 2 19:30:37.806729 kubelet[2026]: E1002 19:30:37.806605 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:38.003203 kubelet[2026]: I1002 19:30:38.003138 2026 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:30:38.003468 kubelet[2026]: E1002 19:30:38.003434 2026 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="368583b9-3f06-40a5-9b5f-b7cc14d4706e" containerName="mount-cgroup" Oct 2 19:30:38.003634 kubelet[2026]: I1002 19:30:38.003613 
2026 memory_manager.go:345] "RemoveStaleState removing state" podUID="368583b9-3f06-40a5-9b5f-b7cc14d4706e" containerName="mount-cgroup" Oct 2 19:30:38.016201 systemd[1]: Created slice kubepods-burstable-podb467b741_6355_4bd6_900d_c17a09fedbd4.slice. Oct 2 19:30:38.021642 kubelet[2026]: I1002 19:30:38.021553 2026 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:30:38.034015 systemd[1]: Created slice kubepods-besteffort-pod5ee57062_b86a_48e6_a20c_c7979594f5fc.slice. Oct 2 19:30:38.046803 kubelet[2026]: I1002 19:30:38.046740 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-run\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047011 kubelet[2026]: I1002 19:30:38.046827 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-lib-modules\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047011 kubelet[2026]: I1002 19:30:38.046874 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-xtables-lock\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047011 kubelet[2026]: I1002 19:30:38.046921 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b467b741-6355-4bd6-900d-c17a09fedbd4-clustermesh-secrets\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047011 kubelet[2026]: I1002 19:30:38.046969 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-host-proc-sys-kernel\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047252 kubelet[2026]: I1002 19:30:38.047014 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b467b741-6355-4bd6-900d-c17a09fedbd4-hubble-tls\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047252 kubelet[2026]: I1002 19:30:38.047058 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-cgroup\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047252 kubelet[2026]: I1002 19:30:38.047099 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-host-proc-sys-net\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047252 kubelet[2026]: I1002 19:30:38.047146 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-gksd9\" (UniqueName: \"kubernetes.io/projected/5ee57062-b86a-48e6-a20c-c7979594f5fc-kube-api-access-gksd9\") pod \"cilium-operator-69b677f97c-pgsfh\" (UID: \"5ee57062-b86a-48e6-a20c-c7979594f5fc\") " pod="kube-system/cilium-operator-69b677f97c-pgsfh" Oct 2 19:30:38.047252 kubelet[2026]: I1002 19:30:38.047194 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-hostproc\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047252 kubelet[2026]: I1002 19:30:38.047240 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cni-path\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047605 kubelet[2026]: I1002 19:30:38.047283 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-etc-cni-netd\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047605 kubelet[2026]: I1002 19:30:38.047325 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-config-path\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047605 kubelet[2026]: I1002 19:30:38.047368 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-bpf-maps\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047605 kubelet[2026]: I1002 19:30:38.047410 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-ipsec-secrets\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047605 kubelet[2026]: I1002 19:30:38.047468 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc7hr\" (UniqueName: \"kubernetes.io/projected/b467b741-6355-4bd6-900d-c17a09fedbd4-kube-api-access-jc7hr\") pod \"cilium-68sn9\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " pod="kube-system/cilium-68sn9" Oct 2 19:30:38.047960 kubelet[2026]: I1002 19:30:38.047516 2026 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ee57062-b86a-48e6-a20c-c7979594f5fc-cilium-config-path\") pod \"cilium-operator-69b677f97c-pgsfh\" (UID: \"5ee57062-b86a-48e6-a20c-c7979594f5fc\") " pod="kube-system/cilium-operator-69b677f97c-pgsfh" Oct 2 19:30:38.327686 env[1559]: time="2023-10-02T19:30:38.327171312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-68sn9,Uid:b467b741-6355-4bd6-900d-c17a09fedbd4,Namespace:kube-system,Attempt:0,}" Oct 2 19:30:38.340629 env[1559]: time="2023-10-02T19:30:38.340552179Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-pgsfh,Uid:5ee57062-b86a-48e6-a20c-c7979594f5fc,Namespace:kube-system,Attempt:0,}" Oct 2 19:30:38.367398 env[1559]: time="2023-10-02T19:30:38.367237581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:30:38.367755 env[1559]: time="2023-10-02T19:30:38.367659886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:30:38.367994 env[1559]: time="2023-10-02T19:30:38.367930594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:30:38.368667 env[1559]: time="2023-10-02T19:30:38.368595803Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf pid=2907 runtime=io.containerd.runc.v2 Oct 2 19:30:38.381186 env[1559]: time="2023-10-02T19:30:38.381065989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:30:38.381984 env[1559]: time="2023-10-02T19:30:38.381886682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:30:38.382245 env[1559]: time="2023-10-02T19:30:38.382185327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:30:38.382964 env[1559]: time="2023-10-02T19:30:38.382896172Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d pid=2923 runtime=io.containerd.runc.v2 Oct 2 19:30:38.406079 systemd[1]: Started cri-containerd-1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf.scope. Oct 2 19:30:38.430751 systemd[1]: Started cri-containerd-96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d.scope. 
Oct 2 19:30:38.453623 kernel: kauditd_printk_skb: 51 callbacks suppressed Oct 2 19:30:38.453807 kernel: audit: type=1400 audit(1696275038.449:754): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.449000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.449000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.470872 kernel: audit: type=1400 audit(1696275038.449:755): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.471032 kernel: audit: type=1400 audit(1696275038.449:756): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.449000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.449000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.483864 kernel: audit: type=1400 audit(1696275038.449:757): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.449000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.498755 kernel: audit: type=1400 audit(1696275038.449:758): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.449000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.513820 kernel: audit: type=1400 audit(1696275038.449:759): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.519025 kernel: audit: type=1400 audit(1696275038.449:760): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.449000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.449000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.532304 kernel: audit: type=1400 audit(1696275038.449:761): avc: denied { perfmon } for 
pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.449000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.540901 kernel: audit: type=1400 audit(1696275038.449:762): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.452000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.555829 kernel: audit: type=1400 audit(1696275038.452:763): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.452000 audit: BPF prog-id=91 op=LOAD Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit[2925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000115b38 a2=10 a3=0 items=0 ppid=2907 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363036383261623761636562623237313538323733633537346238 Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit[2925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001155a0 a2=3c a3=0 items=0 ppid=2907 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363036383261623761636562623237313538323733633537346238 Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.453000 audit: BPF prog-id=92 op=LOAD Oct 2 19:30:38.453000 audit[2925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001158e0 a2=78 a3=0 items=0 ppid=2907 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363036383261623761636562623237313538323733633537346238 Oct 2 19:30:38.461000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.461000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.461000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.461000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.461000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.461000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.461000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 19:30:38.461000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.461000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.461000 audit: BPF prog-id=93 op=LOAD Oct 2 19:30:38.461000 audit[2925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000115670 a2=78 a3=0 items=0 ppid=2907 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.461000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363036383261623761636562623237313538323733633537346238 Oct 2 19:30:38.469000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:30:38.469000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:30:38.469000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.469000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.469000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.469000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.469000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.469000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.469000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.469000 audit[2925]: AVC avc: denied { perfmon } for pid=2925 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.469000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.469000 audit[2925]: AVC avc: denied { bpf } for pid=2925 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.469000 audit: BPF prog-id=94 op=LOAD Oct 2 19:30:38.469000 audit[2925]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000115b40 a2=78 a3=0 items=0 ppid=2907 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165363036383261623761636562623237313538323733633537346238 Oct 2 19:30:38.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.545000 audit: BPF prog-id=95 op=LOAD Oct 2 19:30:38.546000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.546000 audit[2938]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2923 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936613339393764313639356536356136653130646139383263356330 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2923 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936613339393764313639356536356136653130646139383263356330 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit: BPF prog-id=96 op=LOAD Oct 2 19:30:38.554000 audit[2938]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2923 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.554000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936613339393764313639356536356136653130646139383263356330 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.554000 audit: BPF prog-id=97 op=LOAD Oct 2 19:30:38.554000 audit[2938]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2923 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936613339393764313639356536356136653130646139383263356330 Oct 2 19:30:38.555000 audit: BPF prog-id=97 op=UNLOAD Oct 2 19:30:38.555000 audit: BPF prog-id=96 op=UNLOAD Oct 2 19:30:38.555000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.555000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.555000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.555000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.555000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.555000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.555000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.555000 audit[2938]: AVC avc: denied { perfmon } for pid=2938 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.555000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.555000 audit[2938]: AVC avc: denied { bpf } for pid=2938 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:38.555000 audit: BPF prog-id=98 op=LOAD Oct 2 19:30:38.555000 audit[2938]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2923 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:38.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936613339393764313639356536356136653130646139383263356330 Oct 2 19:30:38.580145 env[1559]: time="2023-10-02T19:30:38.579971562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-68sn9,Uid:b467b741-6355-4bd6-900d-c17a09fedbd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\"" Oct 2 19:30:38.589553 env[1559]: time="2023-10-02T19:30:38.588933457Z" level=info msg="CreateContainer within sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:30:38.616789 env[1559]: time="2023-10-02T19:30:38.616722669Z" level=info msg="CreateContainer within sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835\"" Oct 2 19:30:38.618034 env[1559]: time="2023-10-02T19:30:38.617984279Z" level=info msg="StartContainer for \"6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835\"" Oct 2 19:30:38.633940 env[1559]: time="2023-10-02T19:30:38.633882955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-pgsfh,Uid:5ee57062-b86a-48e6-a20c-c7979594f5fc,Namespace:kube-system,Attempt:0,} 
returns sandbox id \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\"" Oct 2 19:30:38.637350 env[1559]: time="2023-10-02T19:30:38.637297682Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:30:38.666953 systemd[1]: Started cri-containerd-6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835.scope. Oct 2 19:30:38.708138 systemd[1]: cri-containerd-6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835.scope: Deactivated successfully. Oct 2 19:30:38.737535 env[1559]: time="2023-10-02T19:30:38.737459965Z" level=info msg="shim disconnected" id=6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835 Oct 2 19:30:38.737846 env[1559]: time="2023-10-02T19:30:38.737534305Z" level=warning msg="cleaning up after shim disconnected" id=6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835 namespace=k8s.io Oct 2 19:30:38.737846 env[1559]: time="2023-10-02T19:30:38.737559457Z" level=info msg="cleaning up dead shim" Oct 2 19:30:38.764789 env[1559]: time="2023-10-02T19:30:38.764688704Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3007 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:38.765269 env[1559]: time="2023-10-02T19:30:38.765180645Z" level=error msg="copy shim log" error="read /proc/self/fd/37: file already closed" Oct 2 19:30:38.766844 env[1559]: time="2023-10-02T19:30:38.766787784Z" level=error msg="Failed to pipe stdout of container \"6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835\"" error="reading from a closed fifo" Oct 2 19:30:38.767355 env[1559]: time="2023-10-02T19:30:38.767003412Z" level=error msg="Failed to pipe stderr of container \"6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835\"" error="reading from a closed fifo" Oct 2 19:30:38.769141 env[1559]: time="2023-10-02T19:30:38.769078528Z" level=error msg="StartContainer for \"6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:38.770221 kubelet[2026]: E1002 19:30:38.769590 2026 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835" Oct 2 19:30:38.770221 kubelet[2026]: E1002 19:30:38.770081 2026 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:38.770221 kubelet[2026]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" 
$CGROUP_ROOT; Oct 2 19:30:38.770221 kubelet[2026]: rm /hostbin/cilium-mount Oct 2 19:30:38.770627 kubelet[2026]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jc7hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:38.770814 kubelet[2026]: E1002 19:30:38.770166 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-68sn9" podUID=b467b741-6355-4bd6-900d-c17a09fedbd4 Oct 2 19:30:38.807139 kubelet[2026]: E1002 19:30:38.807022 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:39.554834 env[1559]: time="2023-10-02T19:30:39.554777712Z" level=info msg="CreateContainer within sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:30:39.582343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484027610.mount: Deactivated successfully. Oct 2 19:30:39.591073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount867799447.mount: Deactivated successfully. Oct 2 19:30:39.598085 env[1559]: time="2023-10-02T19:30:39.598012083Z" level=info msg="CreateContainer within sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0\"" Oct 2 19:30:39.600101 env[1559]: time="2023-10-02T19:30:39.600025711Z" level=info msg="StartContainer for \"f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0\"" Oct 2 19:30:39.653971 systemd[1]: Started cri-containerd-f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0.scope. 
Oct 2 19:30:39.692921 systemd[1]: cri-containerd-f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0.scope: Deactivated successfully. Oct 2 19:30:39.713865 env[1559]: time="2023-10-02T19:30:39.713792120Z" level=info msg="shim disconnected" id=f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0 Oct 2 19:30:39.714186 env[1559]: time="2023-10-02T19:30:39.714152781Z" level=warning msg="cleaning up after shim disconnected" id=f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0 namespace=k8s.io Oct 2 19:30:39.714307 env[1559]: time="2023-10-02T19:30:39.714278613Z" level=info msg="cleaning up dead shim" Oct 2 19:30:39.741426 env[1559]: time="2023-10-02T19:30:39.741361552Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3045 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:39.742161 env[1559]: time="2023-10-02T19:30:39.742079093Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Oct 2 19:30:39.742924 env[1559]: time="2023-10-02T19:30:39.742856407Z" level=error msg="Failed to pipe stderr of container \"f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0\"" error="reading from a closed fifo" Oct 2 19:30:39.746833 env[1559]: time="2023-10-02T19:30:39.746765811Z" level=error msg="Failed to pipe stdout of container \"f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0\"" error="reading from a closed fifo" Oct 2 19:30:39.749383 env[1559]: time="2023-10-02T19:30:39.749269124Z" level=error msg="StartContainer for \"f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:39.749672 kubelet[2026]: E1002 19:30:39.749621 2026 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0" Oct 2 19:30:39.749902 kubelet[2026]: E1002 19:30:39.749798 2026 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:39.749902 kubelet[2026]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:39.749902 kubelet[2026]: rm /hostbin/cilium-mount Oct 2 19:30:39.749902 kubelet[2026]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jc7hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:39.750199 kubelet[2026]: E1002 19:30:39.749857 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-68sn9" podUID=b467b741-6355-4bd6-900d-c17a09fedbd4 Oct 2 19:30:39.807558 kubelet[2026]: E1002 19:30:39.807390 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:39.832248 kubelet[2026]: E1002 19:30:39.832197 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:40.556121 kubelet[2026]: I1002 19:30:40.556070 2026 scope.go:115] "RemoveContainer" containerID="6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835" Oct 2 19:30:40.556665 kubelet[2026]: I1002 19:30:40.556619 2026 scope.go:115] "RemoveContainer" containerID="6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835" Oct 2 19:30:40.559089 env[1559]: time="2023-10-02T19:30:40.559026839Z" level=info msg="RemoveContainer for \"6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835\"" Oct 2 19:30:40.565673 env[1559]: time="2023-10-02T19:30:40.565596168Z" level=info msg="RemoveContainer for \"6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835\" returns successfully" Oct 2 19:30:40.566354 env[1559]: time="2023-10-02T19:30:40.566292829Z" level=info msg="RemoveContainer for \"6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835\"" Oct 2 19:30:40.566469 env[1559]: time="2023-10-02T19:30:40.566352962Z" level=info msg="RemoveContainer for 
\"6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835\" returns successfully" Oct 2 19:30:40.566989 kubelet[2026]: E1002 19:30:40.566944 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4)\"" pod="kube-system/cilium-68sn9" podUID=b467b741-6355-4bd6-900d-c17a09fedbd4 Oct 2 19:30:40.808410 kubelet[2026]: E1002 19:30:40.808267 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:40.967612 env[1559]: time="2023-10-02T19:30:40.967548327Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:40.971401 env[1559]: time="2023-10-02T19:30:40.971352911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:40.974227 env[1559]: time="2023-10-02T19:30:40.974164900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:30:40.975421 env[1559]: time="2023-10-02T19:30:40.975351199Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f\"" Oct 2 19:30:40.980069 env[1559]: time="2023-10-02T19:30:40.980005600Z" level=info msg="CreateContainer within sandbox \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:30:41.003500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount122904296.mount: Deactivated successfully. Oct 2 19:30:41.013315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3715643865.mount: Deactivated successfully. Oct 2 19:30:41.023736 env[1559]: time="2023-10-02T19:30:41.023636476Z" level=info msg="CreateContainer within sandbox \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\"" Oct 2 19:30:41.025440 env[1559]: time="2023-10-02T19:30:41.025369183Z" level=info msg="StartContainer for \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\"" Oct 2 19:30:41.067394 systemd[1]: Started cri-containerd-fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861.scope. 
Oct 2 19:30:41.106000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.106000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.106000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.106000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.106000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.106000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.106000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.106000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.106000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.107000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.107000 audit: BPF prog-id=99 op=LOAD Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2923 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:41.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633364323832623166353435363964633963613536373933373730 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2923 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:41.108000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633364323832623166353435363964633963613536373933373730 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit: BPF prog-id=100 op=LOAD Oct 2 19:30:41.108000 audit[3066]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2923 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:41.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633364323832623166353435363964633963613536373933373730 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { 
perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.108000 audit: BPF prog-id=101 op=LOAD Oct 2 19:30:41.108000 audit[3066]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2923 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:41.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633364323832623166353435363964633963613536373933373730 Oct 2 19:30:41.109000 audit: BPF prog-id=101 op=UNLOAD Oct 2 19:30:41.109000 audit: BPF prog-id=100 op=UNLOAD Oct 2 19:30:41.109000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.109000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.109000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.109000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.109000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.109000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.109000 audit[3066]: AVC avc: denied { perfmon 
} for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.109000 audit[3066]: AVC avc: denied { perfmon } for pid=3066 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.109000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.109000 audit[3066]: AVC avc: denied { bpf } for pid=3066 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:41.109000 audit: BPF prog-id=102 op=LOAD Oct 2 19:30:41.109000 audit[3066]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2923 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:41.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665633364323832623166353435363964633963613536373933373730 Oct 2 19:30:41.144648 env[1559]: time="2023-10-02T19:30:41.144575898Z" level=info msg="StartContainer for \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\" returns successfully" Oct 2 19:30:41.223000 audit[3078]: AVC avc: denied { map_create } for pid=3078 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c1,c597 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c1,c597 tclass=bpf permissive=0 Oct 2 19:30:41.223000 audit[3078]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=4000697768 a2=48 a3=0 items=0 ppid=2923 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c1,c597 key=(null) Oct 2 19:30:41.223000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:30:41.561422 kubelet[2026]: E1002 19:30:41.561362 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4)\"" pod="kube-system/cilium-68sn9" podUID=b467b741-6355-4bd6-900d-c17a09fedbd4 Oct 2 19:30:41.809176 kubelet[2026]: E1002 19:30:41.809136 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:41.843149 kubelet[2026]: W1002 19:30:41.843007 2026 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb467b741_6355_4bd6_900d_c17a09fedbd4.slice/cri-containerd-6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835.scope WatchSource:0}: container "6f23c647eb6b0c42a88562e139813d05da7a73af0606d895eff438471bd66835" in namespace "k8s.io": not found Oct 2 19:30:42.810633 kubelet[2026]: E1002 19:30:42.810587 
2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:43.811863 kubelet[2026]: E1002 19:30:43.811783 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:44.621580 kubelet[2026]: E1002 19:30:44.621541 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:44.812768 kubelet[2026]: E1002 19:30:44.812686 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:44.833656 kubelet[2026]: E1002 19:30:44.833623 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:44.952800 kubelet[2026]: W1002 19:30:44.952752 2026 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb467b741_6355_4bd6_900d_c17a09fedbd4.slice/cri-containerd-f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0.scope WatchSource:0}: task f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0 not found: not found Oct 2 19:30:45.813113 kubelet[2026]: E1002 19:30:45.813065 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:46.814495 kubelet[2026]: E1002 19:30:46.814423 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:47.814659 kubelet[2026]: E1002 19:30:47.814587 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:48.815504 kubelet[2026]: E1002 19:30:48.815437 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:49.815997 kubelet[2026]: E1002 19:30:49.815948 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:49.834991 kubelet[2026]: E1002 19:30:49.834923 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:50.817253 kubelet[2026]: E1002 19:30:50.817208 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:51.818194 kubelet[2026]: E1002 19:30:51.818151 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:52.819299 kubelet[2026]: E1002 19:30:52.819229 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:53.819865 kubelet[2026]: E1002 19:30:53.819807 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:54.072943 env[1559]: time="2023-10-02T19:30:54.072773621Z" level=info msg="CreateContainer within sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:30:54.091897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2953943577.mount: 
Deactivated successfully. Oct 2 19:30:54.102116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598573029.mount: Deactivated successfully. Oct 2 19:30:54.110897 env[1559]: time="2023-10-02T19:30:54.110838727Z" level=info msg="CreateContainer within sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2\"" Oct 2 19:30:54.112077 env[1559]: time="2023-10-02T19:30:54.112028997Z" level=info msg="StartContainer for \"435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2\"" Oct 2 19:30:54.164251 systemd[1]: Started cri-containerd-435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2.scope. Oct 2 19:30:54.199684 systemd[1]: cri-containerd-435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2.scope: Deactivated successfully. Oct 2 19:30:54.467431 env[1559]: time="2023-10-02T19:30:54.467355758Z" level=info msg="shim disconnected" id=435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2 Oct 2 19:30:54.467431 env[1559]: time="2023-10-02T19:30:54.467430878Z" level=warning msg="cleaning up after shim disconnected" id=435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2 namespace=k8s.io Oct 2 19:30:54.467833 env[1559]: time="2023-10-02T19:30:54.467453607Z" level=info msg="cleaning up dead shim" Oct 2 19:30:54.493686 env[1559]: time="2023-10-02T19:30:54.493612157Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3122 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:54Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:54.494197 env[1559]: time="2023-10-02T19:30:54.494101734Z" level=error msg="copy shim log" error="read /proc/self/fd/50: file already closed" Oct 2 19:30:54.497221 env[1559]: time="2023-10-02T19:30:54.497157684Z" level=error msg="Failed to pipe stderr of container \"435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2\"" error="reading from a closed fifo" Oct 2 19:30:54.497221 env[1559]: time="2023-10-02T19:30:54.497138448Z" level=error msg="Failed to pipe stdout of container \"435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2\"" error="reading from a closed fifo" Oct 2 19:30:54.500110 env[1559]: time="2023-10-02T19:30:54.500021802Z" level=error msg="StartContainer for \"435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:54.500638 kubelet[2026]: E1002 19:30:54.500597 2026 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2" Oct 2 19:30:54.501523 kubelet[2026]: E1002 19:30:54.501443 2026 
kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:54.501523 kubelet[2026]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:54.501523 kubelet[2026]: rm /hostbin/cilium-mount Oct 2 19:30:54.501523 kubelet[2026]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jc7hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:54.501936 kubelet[2026]: E1002 19:30:54.501619 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-68sn9" podUID=b467b741-6355-4bd6-900d-c17a09fedbd4 Oct 2 19:30:54.596049 kubelet[2026]: I1002 19:30:54.595991 2026 scope.go:115] "RemoveContainer" containerID="f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0" Oct 2 19:30:54.596769 kubelet[2026]: I1002 19:30:54.596692 2026 scope.go:115] "RemoveContainer" containerID="f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0" Oct 2 19:30:54.599589 env[1559]: time="2023-10-02T19:30:54.599506759Z" level=info msg="RemoveContainer for \"f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0\"" Oct 2 19:30:54.600550 env[1559]: time="2023-10-02T19:30:54.600504537Z" level=info msg="RemoveContainer for \"f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0\"" Oct 2 19:30:54.602500 env[1559]: time="2023-10-02T19:30:54.602412360Z" level=error msg="RemoveContainer for \"f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0\" failed" error="failed to set removing state for container 
\"f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0\": container is already in removing state" Oct 2 19:30:54.602859 kubelet[2026]: E1002 19:30:54.602784 2026 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0\": container is already in removing state" containerID="f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0" Oct 2 19:30:54.602969 kubelet[2026]: E1002 19:30:54.602862 2026 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0": container is already in removing state; Skipping pod "cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4)" Oct 2 19:30:54.603535 kubelet[2026]: E1002 19:30:54.603413 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4)\"" pod="kube-system/cilium-68sn9" podUID=b467b741-6355-4bd6-900d-c17a09fedbd4 Oct 2 19:30:54.606522 env[1559]: time="2023-10-02T19:30:54.606418412Z" level=info msg="RemoveContainer for \"f7db4f5603f138fbf8bbf002e09892009241b292d1ac37a137ea2feb1cb465f0\" returns successfully" Oct 2 19:30:54.820088 kubelet[2026]: E1002 19:30:54.819944 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:54.836028 kubelet[2026]: E1002 19:30:54.835965 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:55.087989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2-rootfs.mount: Deactivated successfully. 
Oct 2 19:30:55.821088 kubelet[2026]: E1002 19:30:55.821013 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:56.821242 kubelet[2026]: E1002 19:30:56.821172 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:57.573334 kubelet[2026]: W1002 19:30:57.573272 2026 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb467b741_6355_4bd6_900d_c17a09fedbd4.slice/cri-containerd-435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2.scope WatchSource:0}: task 435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2 not found: not found Oct 2 19:30:57.821905 kubelet[2026]: E1002 19:30:57.821826 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:58.823129 kubelet[2026]: E1002 19:30:58.823087 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:59.824102 kubelet[2026]: E1002 19:30:59.824014 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:59.836886 kubelet[2026]: E1002 19:30:59.836785 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:00.825014 kubelet[2026]: E1002 19:31:00.824971 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:01.826035 kubelet[2026]: E1002 19:31:01.825991 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:02.826891 kubelet[2026]: E1002 19:31:02.826821 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:03.827841 kubelet[2026]: E1002 19:31:03.827793 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:04.621374 kubelet[2026]: E1002 19:31:04.621319 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:04.659260 env[1559]: time="2023-10-02T19:31:04.659187373Z" level=info msg="StopPodSandbox for \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\"" Oct 2 19:31:04.660949 env[1559]: time="2023-10-02T19:31:04.659328445Z" level=info msg="TearDown network for sandbox \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\" successfully" Oct 2 19:31:04.660949 env[1559]: time="2023-10-02T19:31:04.659387557Z" level=info msg="StopPodSandbox for \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\" returns successfully" Oct 2 19:31:04.660949 env[1559]: time="2023-10-02T19:31:04.660149871Z" level=info msg="RemovePodSandbox for \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\"" Oct 2 19:31:04.660949 env[1559]: time="2023-10-02T19:31:04.660248331Z" level=info msg="Forcibly stopping sandbox \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\"" Oct 2 19:31:04.660949 env[1559]: time="2023-10-02T19:31:04.660424767Z" level=info msg="TearDown network for sandbox 
\"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\" successfully" Oct 2 19:31:04.666353 env[1559]: time="2023-10-02T19:31:04.666239570Z" level=info msg="RemovePodSandbox \"59fadfb8e5fa86cbece91663acfe8734aac24d7d3ee62f3437a4a483b3c50b5e\" returns successfully" Oct 2 19:31:04.667455 env[1559]: time="2023-10-02T19:31:04.667162324Z" level=info msg="StopPodSandbox for \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\"" Oct 2 19:31:04.667455 env[1559]: time="2023-10-02T19:31:04.667291804Z" level=info msg="TearDown network for sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" successfully" Oct 2 19:31:04.667455 env[1559]: time="2023-10-02T19:31:04.667346020Z" level=info msg="StopPodSandbox for \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" returns successfully" Oct 2 19:31:04.668100 env[1559]: time="2023-10-02T19:31:04.668035230Z" level=info msg="RemovePodSandbox for \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\"" Oct 2 19:31:04.668205 env[1559]: time="2023-10-02T19:31:04.668115510Z" level=info msg="Forcibly stopping sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\"" Oct 2 19:31:04.668367 env[1559]: time="2023-10-02T19:31:04.668284182Z" level=info msg="TearDown network for sandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" successfully" Oct 2 19:31:04.673345 env[1559]: time="2023-10-02T19:31:04.673276456Z" level=info msg="RemovePodSandbox \"fd91683ff70757f5de0a1966f68801b556df81d3d0a1326960266f1bfdba1d8b\" returns successfully" Oct 2 19:31:04.829423 kubelet[2026]: E1002 19:31:04.829389 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:04.837914 kubelet[2026]: E1002 19:31:04.837872 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:05.830603 kubelet[2026]: E1002 19:31:05.830561 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:06.832134 kubelet[2026]: E1002 19:31:06.832069 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:07.069666 kubelet[2026]: E1002 19:31:07.069623 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4)\"" pod="kube-system/cilium-68sn9" podUID=b467b741-6355-4bd6-900d-c17a09fedbd4 Oct 2 19:31:07.833047 kubelet[2026]: E1002 19:31:07.832978 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:08.833882 kubelet[2026]: E1002 19:31:08.833787 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:09.834207 kubelet[2026]: E1002 19:31:09.834136 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:09.839239 kubelet[2026]: E1002 19:31:09.839187 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" Oct 2 19:31:10.835330 kubelet[2026]: E1002 19:31:10.835261 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:11.835858 kubelet[2026]: E1002 19:31:11.835783 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:12.836840 kubelet[2026]: E1002 19:31:12.836767 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:13.837615 kubelet[2026]: E1002 19:31:13.837486 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:14.838023 kubelet[2026]: E1002 19:31:14.837988 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:14.840290 kubelet[2026]: E1002 19:31:14.840262 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:15.839566 kubelet[2026]: E1002 19:31:15.839493 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:16.840515 kubelet[2026]: E1002 19:31:16.840468 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:17.841323 kubelet[2026]: E1002 19:31:17.841225 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:18.842321 kubelet[2026]: E1002 19:31:18.842256 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:19.841548 kubelet[2026]: E1002 19:31:19.841503 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:19.842787 kubelet[2026]: E1002 19:31:19.842756 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:20.843593 kubelet[2026]: E1002 19:31:20.843520 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:21.844633 kubelet[2026]: E1002 19:31:21.844588 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:22.073040 env[1559]: time="2023-10-02T19:31:22.072966401Z" level=info msg="CreateContainer within sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:31:22.091443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010562196.mount: Deactivated successfully. Oct 2 19:31:22.102484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2583543836.mount: Deactivated successfully. 
Oct 2 19:31:22.116932 env[1559]: time="2023-10-02T19:31:22.116868051Z" level=info msg="CreateContainer within sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2\"" Oct 2 19:31:22.118154 env[1559]: time="2023-10-02T19:31:22.118104965Z" level=info msg="StartContainer for \"6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2\"" Oct 2 19:31:22.165362 systemd[1]: Started cri-containerd-6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2.scope. Oct 2 19:31:22.200663 systemd[1]: cri-containerd-6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2.scope: Deactivated successfully. Oct 2 19:31:22.218108 env[1559]: time="2023-10-02T19:31:22.218033616Z" level=info msg="shim disconnected" id=6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2 Oct 2 19:31:22.218108 env[1559]: time="2023-10-02T19:31:22.218112288Z" level=warning msg="cleaning up after shim disconnected" id=6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2 namespace=k8s.io Oct 2 19:31:22.218450 env[1559]: time="2023-10-02T19:31:22.218134812Z" level=info msg="cleaning up dead shim" Oct 2 19:31:22.245407 env[1559]: time="2023-10-02T19:31:22.245314287Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3164 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:31:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:31:22.245878 env[1559]: time="2023-10-02T19:31:22.245779636Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed" Oct 2 19:31:22.247909 env[1559]: time="2023-10-02T19:31:22.247842368Z" level=error msg="Failed to pipe stderr of container \"6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2\"" error="reading from a closed fifo" Oct 2 19:31:22.248112 env[1559]: time="2023-10-02T19:31:22.248046632Z" level=error msg="Failed to pipe stdout of container \"6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2\"" error="reading from a closed fifo" Oct 2 19:31:22.250576 env[1559]: time="2023-10-02T19:31:22.250498069Z" level=error msg="StartContainer for \"6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:31:22.250847 kubelet[2026]: E1002 19:31:22.250812 2026 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2" Oct 2 19:31:22.251466 kubelet[2026]: E1002 19:31:22.251417 2026 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:31:22.251466 kubelet[2026]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:31:22.251466 kubelet[2026]: rm /hostbin/cilium-mount Oct 2 19:31:22.251466 kubelet[2026]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jc7hr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:31:22.251865 kubelet[2026]: E1002 19:31:22.251489 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-68sn9" podUID=b467b741-6355-4bd6-900d-c17a09fedbd4 Oct 2 19:31:22.657358 kubelet[2026]: I1002 19:31:22.657313 2026 scope.go:115] "RemoveContainer" containerID="435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2" Oct 2 19:31:22.657888 kubelet[2026]: I1002 19:31:22.657856 2026 scope.go:115] "RemoveContainer" containerID="435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2" Oct 2 19:31:22.660314 env[1559]: time="2023-10-02T19:31:22.660231299Z" level=info msg="RemoveContainer for \"435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2\"" Oct 2 19:31:22.661098 env[1559]: time="2023-10-02T19:31:22.661032048Z" level=info msg="RemoveContainer for \"435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2\"" Oct 2 19:31:22.661245 env[1559]: time="2023-10-02T19:31:22.661157917Z" level=error msg="RemoveContainer for \"435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2\" failed" error="failed to set removing state for container 
\"435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2\": container is already in removing state" Oct 2 19:31:22.661471 kubelet[2026]: E1002 19:31:22.661405 2026 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2\": container is already in removing state" containerID="435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2" Oct 2 19:31:22.661471 kubelet[2026]: E1002 19:31:22.661466 2026 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2": container is already in removing state; Skipping pod "cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4)" Oct 2 19:31:22.661938 kubelet[2026]: E1002 19:31:22.661900 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4)\"" pod="kube-system/cilium-68sn9" podUID=b467b741-6355-4bd6-900d-c17a09fedbd4 Oct 2 19:31:22.666374 env[1559]: time="2023-10-02T19:31:22.666308626Z" level=info msg="RemoveContainer for \"435c6277e237b6677141f850a3202c040b7f55a3ca3403ca98e28688c22fd7f2\" returns successfully" Oct 2 19:31:22.845793 kubelet[2026]: E1002 19:31:22.845733 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:23.085936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2-rootfs.mount: Deactivated successfully. 
Oct 2 19:31:23.846808 kubelet[2026]: E1002 19:31:23.846766 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:24.621664 kubelet[2026]: E1002 19:31:24.621587 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:24.843399 kubelet[2026]: E1002 19:31:24.843334 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:24.848518 kubelet[2026]: E1002 19:31:24.848463 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:25.325080 kubelet[2026]: W1002 19:31:25.325012 2026 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb467b741_6355_4bd6_900d_c17a09fedbd4.slice/cri-containerd-6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2.scope WatchSource:0}: task 6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2 not found: not found Oct 2 19:31:25.849450 kubelet[2026]: E1002 19:31:25.849405 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:26.851028 kubelet[2026]: E1002 19:31:26.850962 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:27.851581 kubelet[2026]: E1002 19:31:27.851538 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:28.853155 kubelet[2026]: E1002 19:31:28.853096 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:29.844646 kubelet[2026]: E1002 19:31:29.844577 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:29.853799 kubelet[2026]: E1002 19:31:29.853774 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:30.855444 kubelet[2026]: E1002 19:31:30.855380 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:31.856614 kubelet[2026]: E1002 19:31:31.856540 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:32.857023 kubelet[2026]: E1002 19:31:32.856979 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:33.857924 kubelet[2026]: E1002 19:31:33.857884 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:34.845748 kubelet[2026]: E1002 19:31:34.845685 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:34.859157 kubelet[2026]: E1002 19:31:34.859113 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:35.859520 kubelet[2026]: E1002 
19:31:35.859480 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:36.860833 kubelet[2026]: E1002 19:31:36.860779 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:37.861484 kubelet[2026]: E1002 19:31:37.861442 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:38.069946 kubelet[2026]: E1002 19:31:38.069906 2026 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-68sn9_kube-system(b467b741-6355-4bd6-900d-c17a09fedbd4)\"" pod="kube-system/cilium-68sn9" podUID=b467b741-6355-4bd6-900d-c17a09fedbd4 Oct 2 19:31:38.862293 kubelet[2026]: E1002 19:31:38.862241 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:39.836420 env[1559]: time="2023-10-02T19:31:39.836364345Z" level=info msg="StopPodSandbox for \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\"" Oct 2 19:31:39.837223 env[1559]: time="2023-10-02T19:31:39.837178930Z" level=info msg="Container to stop \"6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:31:39.839868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf-shm.mount: Deactivated successfully. Oct 2 19:31:39.846655 kubelet[2026]: E1002 19:31:39.846580 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:39.859672 systemd[1]: cri-containerd-1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf.scope: Deactivated successfully. Oct 2 19:31:39.858000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:31:39.863256 kernel: kauditd_printk_skb: 164 callbacks suppressed Oct 2 19:31:39.863369 kernel: audit: type=1334 audit(1696275099.858:809): prog-id=91 op=UNLOAD Oct 2 19:31:39.863549 kubelet[2026]: E1002 19:31:39.863184 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:39.867000 audit: BPF prog-id=94 op=UNLOAD Oct 2 19:31:39.872933 kernel: audit: type=1334 audit(1696275099.867:810): prog-id=94 op=UNLOAD Oct 2 19:31:39.873086 env[1559]: time="2023-10-02T19:31:39.873012964Z" level=info msg="StopContainer for \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\" with timeout 30 (s)" Oct 2 19:31:39.874131 env[1559]: time="2023-10-02T19:31:39.874080906Z" level=info msg="Stop container \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\" with signal terminated" Oct 2 19:31:39.922367 systemd[1]: cri-containerd-fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861.scope: Deactivated successfully. 
Oct 2 19:31:39.921000 audit: BPF prog-id=99 op=UNLOAD Oct 2 19:31:39.925000 audit: BPF prog-id=102 op=UNLOAD Oct 2 19:31:39.929097 kernel: audit: type=1334 audit(1696275099.921:811): prog-id=99 op=UNLOAD Oct 2 19:31:39.929212 kernel: audit: type=1334 audit(1696275099.925:812): prog-id=102 op=UNLOAD Oct 2 19:31:39.933209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf-rootfs.mount: Deactivated successfully. Oct 2 19:31:39.951814 env[1559]: time="2023-10-02T19:31:39.951737682Z" level=info msg="shim disconnected" id=1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf Oct 2 19:31:39.953880 env[1559]: time="2023-10-02T19:31:39.953804062Z" level=warning msg="cleaning up after shim disconnected" id=1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf namespace=k8s.io Oct 2 19:31:39.953880 env[1559]: time="2023-10-02T19:31:39.953863930Z" level=info msg="cleaning up dead shim" Oct 2 19:31:39.980242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861-rootfs.mount: Deactivated successfully. Oct 2 19:31:39.993236 env[1559]: time="2023-10-02T19:31:39.993171815Z" level=info msg="shim disconnected" id=fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861 Oct 2 19:31:39.993640 env[1559]: time="2023-10-02T19:31:39.993606779Z" level=warning msg="cleaning up after shim disconnected" id=fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861 namespace=k8s.io Oct 2 19:31:39.993866 env[1559]: time="2023-10-02T19:31:39.993822972Z" level=info msg="cleaning up dead shim" Oct 2 19:31:39.998624 env[1559]: time="2023-10-02T19:31:39.998550273Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3211 runtime=io.containerd.runc.v2\n" Oct 2 19:31:39.999178 env[1559]: time="2023-10-02T19:31:39.999121522Z" level=info msg="TearDown network for sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" successfully" Oct 2 19:31:39.999293 env[1559]: time="2023-10-02T19:31:39.999169726Z" level=info msg="StopPodSandbox for \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" returns successfully" Oct 2 19:31:40.028674 env[1559]: time="2023-10-02T19:31:40.028606860Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3230 runtime=io.containerd.runc.v2\n" Oct 2 19:31:40.031831 env[1559]: time="2023-10-02T19:31:40.031752810Z" level=info msg="StopContainer for \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\" returns successfully" Oct 2 19:31:40.032692 env[1559]: time="2023-10-02T19:31:40.032608940Z" level=info msg="StopPodSandbox for \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\"" Oct 2 19:31:40.032895 env[1559]: time="2023-10-02T19:31:40.032820968Z" level=info msg="Container to stop \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:31:40.035214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d-shm.mount: Deactivated successfully. Oct 2 19:31:40.055020 systemd[1]: cri-containerd-96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d.scope: Deactivated successfully. 
Oct 2 19:31:40.053000 audit: BPF prog-id=95 op=UNLOAD Oct 2 19:31:40.059756 kernel: audit: type=1334 audit(1696275100.053:813): prog-id=95 op=UNLOAD Oct 2 19:31:40.063000 audit: BPF prog-id=98 op=UNLOAD Oct 2 19:31:40.068833 kernel: audit: type=1334 audit(1696275100.063:814): prog-id=98 op=UNLOAD Oct 2 19:31:40.117119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d-rootfs.mount: Deactivated successfully. Oct 2 19:31:40.119618 kubelet[2026]: I1002 19:31:40.118085 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-host-proc-sys-kernel\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.119618 kubelet[2026]: I1002 19:31:40.118146 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-host-proc-sys-net\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.119618 kubelet[2026]: I1002 19:31:40.118195 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-ipsec-secrets\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.119618 kubelet[2026]: I1002 19:31:40.118240 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b467b741-6355-4bd6-900d-c17a09fedbd4-hubble-tls\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.119618 kubelet[2026]: I1002 19:31:40.118281 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-cgroup\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.119618 kubelet[2026]: I1002 19:31:40.118318 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-hostproc\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.120186 kubelet[2026]: I1002 19:31:40.118356 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-run\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.120186 kubelet[2026]: I1002 19:31:40.118399 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-xtables-lock\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.120186 kubelet[2026]: I1002 19:31:40.118435 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-etc-cni-netd\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: 
\"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.120186 kubelet[2026]: I1002 19:31:40.118474 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cni-path\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.120186 kubelet[2026]: I1002 19:31:40.118518 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-config-path\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.120186 kubelet[2026]: I1002 19:31:40.118586 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-bpf-maps\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.120556 kubelet[2026]: I1002 19:31:40.118627 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-lib-modules\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.120556 kubelet[2026]: I1002 19:31:40.118669 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b467b741-6355-4bd6-900d-c17a09fedbd4-clustermesh-secrets\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.120556 kubelet[2026]: I1002 19:31:40.118746 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc7hr\" (UniqueName: \"kubernetes.io/projected/b467b741-6355-4bd6-900d-c17a09fedbd4-kube-api-access-jc7hr\") pod \"b467b741-6355-4bd6-900d-c17a09fedbd4\" (UID: \"b467b741-6355-4bd6-900d-c17a09fedbd4\") " Oct 2 19:31:40.120556 kubelet[2026]: I1002 19:31:40.119379 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.120556 kubelet[2026]: I1002 19:31:40.119440 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.120954 kubelet[2026]: I1002 19:31:40.119480 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.120954 kubelet[2026]: I1002 19:31:40.120209 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.120954 kubelet[2026]: I1002 19:31:40.120261 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-hostproc" (OuterVolumeSpecName: "hostproc") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.120954 kubelet[2026]: W1002 19:31:40.120475 2026 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/b467b741-6355-4bd6-900d-c17a09fedbd4/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:31:40.124167 kubelet[2026]: I1002 19:31:40.121833 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.124167 kubelet[2026]: I1002 19:31:40.121934 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.124167 kubelet[2026]: I1002 19:31:40.122000 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cni-path" (OuterVolumeSpecName: "cni-path") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.124167 kubelet[2026]: I1002 19:31:40.122073 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.124167 kubelet[2026]: I1002 19:31:40.122182 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:31:40.129539 kubelet[2026]: I1002 19:31:40.129467 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:31:40.140910 kubelet[2026]: I1002 19:31:40.140832 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b467b741-6355-4bd6-900d-c17a09fedbd4-kube-api-access-jc7hr" (OuterVolumeSpecName: "kube-api-access-jc7hr") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "kube-api-access-jc7hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:31:40.141183 kubelet[2026]: I1002 19:31:40.140966 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:31:40.144883 kubelet[2026]: I1002 19:31:40.144830 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b467b741-6355-4bd6-900d-c17a09fedbd4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:31:40.146236 env[1559]: time="2023-10-02T19:31:40.145861489Z" level=info msg="shim disconnected" id=96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d Oct 2 19:31:40.146236 env[1559]: time="2023-10-02T19:31:40.145955785Z" level=warning msg="cleaning up after shim disconnected" id=96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d namespace=k8s.io Oct 2 19:31:40.146236 env[1559]: time="2023-10-02T19:31:40.145977709Z" level=info msg="cleaning up dead shim" Oct 2 19:31:40.150846 kubelet[2026]: I1002 19:31:40.150791 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b467b741-6355-4bd6-900d-c17a09fedbd4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b467b741-6355-4bd6-900d-c17a09fedbd4" (UID: "b467b741-6355-4bd6-900d-c17a09fedbd4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:31:40.173550 env[1559]: time="2023-10-02T19:31:40.173461168Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3266 runtime=io.containerd.runc.v2\n" Oct 2 19:31:40.174118 env[1559]: time="2023-10-02T19:31:40.174056285Z" level=info msg="TearDown network for sandbox \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\" successfully" Oct 2 19:31:40.174241 env[1559]: time="2023-10-02T19:31:40.174108305Z" level=info msg="StopPodSandbox for \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\" returns successfully" Oct 2 19:31:40.219289 kubelet[2026]: I1002 19:31:40.219247 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ee57062-b86a-48e6-a20c-c7979594f5fc-cilium-config-path\") pod \"5ee57062-b86a-48e6-a20c-c7979594f5fc\" (UID: \"5ee57062-b86a-48e6-a20c-c7979594f5fc\") " Oct 2 19:31:40.219587 kubelet[2026]: I1002 19:31:40.219543 2026 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gksd9\" (UniqueName: \"kubernetes.io/projected/5ee57062-b86a-48e6-a20c-c7979594f5fc-kube-api-access-gksd9\") pod \"5ee57062-b86a-48e6-a20c-c7979594f5fc\" (UID: \"5ee57062-b86a-48e6-a20c-c7979594f5fc\") " Oct 2 19:31:40.219801 kubelet[2026]: I1002 19:31:40.219776 2026 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-run\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.219951 kubelet[2026]: I1002 19:31:40.219930 2026 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-xtables-lock\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.220108 kubelet[2026]: W1002 19:31:40.220051 2026 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/5ee57062-b86a-48e6-a20c-c7979594f5fc/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:31:40.220236 kubelet[2026]: I1002 19:31:40.220216 2026 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-etc-cni-netd\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.220383 kubelet[2026]: I1002 19:31:40.220363 2026 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cni-path\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.220527 kubelet[2026]: I1002 19:31:40.220508 2026 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-config-path\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.220670 kubelet[2026]: I1002 19:31:40.220651 2026 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-bpf-maps\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.221433 kubelet[2026]: I1002 19:31:40.220808 2026 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-lib-modules\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.221433 kubelet[2026]: I1002 19:31:40.220841 2026 
reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b467b741-6355-4bd6-900d-c17a09fedbd4-clustermesh-secrets\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.221433 kubelet[2026]: I1002 19:31:40.220882 2026 reconciler.go:399] "Volume detached for volume \"kube-api-access-jc7hr\" (UniqueName: \"kubernetes.io/projected/b467b741-6355-4bd6-900d-c17a09fedbd4-kube-api-access-jc7hr\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.221433 kubelet[2026]: I1002 19:31:40.220924 2026 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-host-proc-sys-kernel\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.221433 kubelet[2026]: I1002 19:31:40.220950 2026 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-host-proc-sys-net\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.221433 kubelet[2026]: I1002 19:31:40.220973 2026 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b467b741-6355-4bd6-900d-c17a09fedbd4-hubble-tls\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.221433 kubelet[2026]: I1002 19:31:40.220997 2026 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-cgroup\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.221433 kubelet[2026]: I1002 19:31:40.221020 2026 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b467b741-6355-4bd6-900d-c17a09fedbd4-hostproc\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.222058 kubelet[2026]: I1002 19:31:40.221043 2026 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b467b741-6355-4bd6-900d-c17a09fedbd4-cilium-ipsec-secrets\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.225222 kubelet[2026]: I1002 19:31:40.225156 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ee57062-b86a-48e6-a20c-c7979594f5fc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ee57062-b86a-48e6-a20c-c7979594f5fc" (UID: "5ee57062-b86a-48e6-a20c-c7979594f5fc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:31:40.231302 kubelet[2026]: I1002 19:31:40.231252 2026 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee57062-b86a-48e6-a20c-c7979594f5fc-kube-api-access-gksd9" (OuterVolumeSpecName: "kube-api-access-gksd9") pod "5ee57062-b86a-48e6-a20c-c7979594f5fc" (UID: "5ee57062-b86a-48e6-a20c-c7979594f5fc"). InnerVolumeSpecName "kube-api-access-gksd9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:31:40.321619 kubelet[2026]: I1002 19:31:40.321575 2026 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ee57062-b86a-48e6-a20c-c7979594f5fc-cilium-config-path\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.321869 kubelet[2026]: I1002 19:31:40.321848 2026 reconciler.go:399] "Volume detached for volume \"kube-api-access-gksd9\" (UniqueName: \"kubernetes.io/projected/5ee57062-b86a-48e6-a20c-c7979594f5fc-kube-api-access-gksd9\") on node \"172.31.27.68\" DevicePath \"\"" Oct 2 19:31:40.704875 kubelet[2026]: I1002 19:31:40.704837 2026 scope.go:115] "RemoveContainer" containerID="fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861" Oct 2 19:31:40.707083 env[1559]: time="2023-10-02T19:31:40.707019510Z" level=info msg="RemoveContainer for \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\"" Oct 2 19:31:40.711682 env[1559]: time="2023-10-02T19:31:40.711608738Z" level=info msg="RemoveContainer for \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\" returns successfully" Oct 2 19:31:40.714258 kubelet[2026]: I1002 19:31:40.712678 2026 scope.go:115] "RemoveContainer" containerID="fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861" Oct 2 19:31:40.713791 systemd[1]: Removed slice kubepods-besteffort-pod5ee57062_b86a_48e6_a20c_c7979594f5fc.slice. Oct 2 19:31:40.714834 env[1559]: time="2023-10-02T19:31:40.713496174Z" level=error msg="ContainerStatus for \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\": not found" Oct 2 19:31:40.715173 kubelet[2026]: E1002 19:31:40.715129 2026 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\": not found" containerID="fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861" Oct 2 19:31:40.715273 kubelet[2026]: I1002 19:31:40.715208 2026 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861} err="failed to get container status \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\": rpc error: code = NotFound desc = an error occurred when try to find container \"fec3d282b1f54569dc9ca5679377013bb1b8f15c74fd6af2bb5645a061bbb861\": not found" Oct 2 19:31:40.715273 kubelet[2026]: I1002 19:31:40.715234 2026 scope.go:115] "RemoveContainer" containerID="6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2" Oct 2 19:31:40.717981 env[1559]: time="2023-10-02T19:31:40.717914558Z" level=info msg="RemoveContainer for \"6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2\"" Oct 2 19:31:40.723285 env[1559]: time="2023-10-02T19:31:40.723199824Z" level=info msg="RemoveContainer for \"6fc400bbd7491cbbdfa3582ad9c7840985b8189cb4896605aa0716385c13fce2\" returns successfully" Oct 2 19:31:40.725523 systemd[1]: Removed slice kubepods-burstable-podb467b741_6355_4bd6_900d_c17a09fedbd4.slice. Oct 2 19:31:40.839609 systemd[1]: var-lib-kubelet-pods-b467b741\x2d6355\x2d4bd6\x2d900d\x2dc17a09fedbd4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djc7hr.mount: Deactivated successfully. 
Oct 2 19:31:40.839804 systemd[1]: var-lib-kubelet-pods-5ee57062\x2db86a\x2d48e6\x2da20c\x2dc7979594f5fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgksd9.mount: Deactivated successfully.
Oct 2 19:31:40.839941 systemd[1]: var-lib-kubelet-pods-b467b741\x2d6355\x2d4bd6\x2d900d\x2dc17a09fedbd4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:31:40.840074 systemd[1]: var-lib-kubelet-pods-b467b741\x2d6355\x2d4bd6\x2d900d\x2dc17a09fedbd4-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:31:40.840209 systemd[1]: var-lib-kubelet-pods-b467b741\x2d6355\x2d4bd6\x2d900d\x2dc17a09fedbd4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 19:31:40.864772 kubelet[2026]: E1002 19:31:40.864721 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:41.076214 kubelet[2026]: I1002 19:31:41.075325 2026 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=5ee57062-b86a-48e6-a20c-c7979594f5fc path="/var/lib/kubelet/pods/5ee57062-b86a-48e6-a20c-c7979594f5fc/volumes"
Oct 2 19:31:41.077865 kubelet[2026]: I1002 19:31:41.077838 2026 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b467b741-6355-4bd6-900d-c17a09fedbd4 path="/var/lib/kubelet/pods/b467b741-6355-4bd6-900d-c17a09fedbd4/volumes"
Oct 2 19:31:41.865788 kubelet[2026]: E1002 19:31:41.865743 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:42.867033 kubelet[2026]: E1002 19:31:42.866975 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:43.867436 kubelet[2026]: E1002 19:31:43.867393 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:44.622003 kubelet[2026]: E1002 19:31:44.621927 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:44.847981 kubelet[2026]: E1002 19:31:44.847937 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:31:44.868448 kubelet[2026]: E1002 19:31:44.868414 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:45.869520 kubelet[2026]: E1002 19:31:45.869475 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:46.870767 kubelet[2026]: E1002 19:31:46.870727 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:47.872232 kubelet[2026]: E1002 19:31:47.872179 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:48.396437 amazon-ssm-agent[1540]: 2023-10-02 19:31:48 INFO Backing off health check to every 600 seconds for 1800 seconds.
Oct 2 19:31:48.497477 amazon-ssm-agent[1540]: 2023-10-02 19:31:48 ERROR Health ping failed with error - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-05dd446d39e817e56 is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-05dd446d39e817e56 because no identity-based policy allows the ssm:UpdateInstanceInformation action
Oct 2 19:31:48.497477 amazon-ssm-agent[1540]: status code: 400, request id: 31507f33-a5ff-43e5-8b90-0fc732ef1ea1
Oct 2 19:31:48.872935 kubelet[2026]: E1002 19:31:48.872895 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:49.849339 kubelet[2026]: E1002 19:31:49.849304 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:31:49.874087 kubelet[2026]: E1002 19:31:49.874062 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:50.875640 kubelet[2026]: E1002 19:31:50.875576 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:51.876119 kubelet[2026]: E1002 19:31:51.876075 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:52.877155 kubelet[2026]: E1002 19:31:52.877086 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:53.877330 kubelet[2026]: E1002 19:31:53.877265 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:54.056740 kubelet[2026]: E1002 19:31:54.056636 2026 controller.go:187] failed to update lease, error: Put "https://172.31.17.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.68?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Oct 2 19:31:54.851270 kubelet[2026]: E1002 19:31:54.851184 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:31:54.877936 kubelet[2026]: E1002 19:31:54.877861 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:55.879005 kubelet[2026]: E1002 19:31:55.878965 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:56.879972 kubelet[2026]: E1002 19:31:56.879908 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:57.880259 kubelet[2026]: E1002 19:31:57.880220 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:58.881770 kubelet[2026]: E1002 19:31:58.881679 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:31:59.852226 kubelet[2026]: E1002 19:31:59.852153 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:31:59.882956 kubelet[2026]: E1002 19:31:59.882886 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:00.883222 kubelet[2026]: E1002 19:32:00.883157 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:01.884287 kubelet[2026]: E1002 19:32:01.884214 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:02.884666 kubelet[2026]: E1002 19:32:02.884598 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:03.885276 kubelet[2026]: E1002 19:32:03.885206 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:04.057669 kubelet[2026]: E1002 19:32:04.057521 2026 controller.go:187] failed to update lease, error: Put "https://172.31.17.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.68?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Oct 2 19:32:04.622071 kubelet[2026]: E1002 19:32:04.621973 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:04.677110 env[1559]: time="2023-10-02T19:32:04.677055347Z" level=info msg="StopPodSandbox for \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\""
Oct 2 19:32:04.677758 env[1559]: time="2023-10-02T19:32:04.677194104Z" level=info msg="TearDown network for sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" successfully"
Oct 2 19:32:04.677758 env[1559]: time="2023-10-02T19:32:04.677251428Z" level=info msg="StopPodSandbox for \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" returns successfully"
Oct 2 19:32:04.677939 env[1559]: time="2023-10-02T19:32:04.677868709Z" level=info msg="RemovePodSandbox for \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\""
Oct 2 19:32:04.678034 env[1559]: time="2023-10-02T19:32:04.677948629Z" level=info msg="Forcibly stopping sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\""
Oct 2 19:32:04.678183 env[1559]: time="2023-10-02T19:32:04.678124309Z" level=info msg="TearDown network for sandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" successfully"
Oct 2 19:32:04.682965 env[1559]: time="2023-10-02T19:32:04.682899910Z" level=info msg="RemovePodSandbox \"1e60682ab7acebb27158273c574b8ca45a2925bc631509d2fee28803a65bb7bf\" returns successfully"
Oct 2 19:32:04.683531 env[1559]: time="2023-10-02T19:32:04.683490743Z" level=info msg="StopPodSandbox for \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\""
Oct 2 19:32:04.683889 env[1559]: time="2023-10-02T19:32:04.683825952Z" level=info msg="TearDown network for sandbox \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\" successfully"
Oct 2 19:32:04.684031 env[1559]: time="2023-10-02T19:32:04.683997612Z" level=info msg="StopPodSandbox for \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\" returns successfully"
Oct 2 19:32:04.684584 env[1559]: time="2023-10-02T19:32:04.684546349Z" level=info msg="RemovePodSandbox for \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\""
Oct 2 19:32:04.684787 env[1559]: time="2023-10-02T19:32:04.684732614Z" level=info msg="Forcibly stopping sandbox \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\""
Oct 2 19:32:04.684978 env[1559]: time="2023-10-02T19:32:04.684943382Z" level=info msg="TearDown network for sandbox \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\" successfully"
Oct 2 19:32:04.689232 env[1559]: time="2023-10-02T19:32:04.689183062Z" level=info msg="RemovePodSandbox \"96a3997d1695e65a6e10da982c5c0be5da173c5ed384a47e4c98e7faa7c01d3d\" returns successfully"
Oct 2 19:32:04.703149 kubelet[2026]: W1002 19:32:04.703096 2026 machine.go:65] Cannot read vendor id correctly, set empty.
Oct 2 19:32:04.853355 kubelet[2026]: E1002 19:32:04.853310 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:04.890375 kubelet[2026]: E1002 19:32:04.886154 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:05.887572 kubelet[2026]: E1002 19:32:05.887533 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:06.888758 kubelet[2026]: E1002 19:32:06.888692 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:07.889684 kubelet[2026]: E1002 19:32:07.889639 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:08.890896 kubelet[2026]: E1002 19:32:08.890856 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:09.854995 kubelet[2026]: E1002 19:32:09.854962 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:09.892044 kubelet[2026]: E1002 19:32:09.892012 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:10.893195 kubelet[2026]: E1002 19:32:10.893152 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:11.894413 kubelet[2026]: E1002 19:32:11.894353 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:12.895512 kubelet[2026]: E1002 19:32:12.895449 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:13.895856 kubelet[2026]: E1002 19:32:13.895814 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:14.058631 kubelet[2026]: E1002 19:32:14.058568 2026 controller.go:187] failed to update lease, error: Put "https://172.31.17.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.68?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Oct 2 19:32:14.856930 kubelet[2026]: E1002 19:32:14.856895 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:14.897478 kubelet[2026]: E1002 19:32:14.897438 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:15.539497 kubelet[2026]: E1002 19:32:15.539440 2026 controller.go:187] failed to update lease, error: Put "https://172.31.17.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.68?timeout=10s": unexpected EOF
Oct 2 19:32:15.540080 kubelet[2026]: E1002 19:32:15.540015 2026 controller.go:187] failed to update lease, error: Put "https://172.31.17.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.68?timeout=10s": dial tcp 172.31.17.33:6443: connect: connection refused
Oct 2 19:32:15.540080 kubelet[2026]: I1002 19:32:15.540067 2026 controller.go:114] failed to update lease using latest lease, fallback to ensure lease, err: failed 5 attempts to update lease
Oct 2 19:32:15.540658 kubelet[2026]: E1002 19:32:15.540600 2026 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.17.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.68?timeout=10s": dial tcp 172.31.17.33:6443: connect: connection refused
Oct 2 19:32:15.742224 kubelet[2026]: E1002 19:32:15.742157 2026 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.17.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.68?timeout=10s": dial tcp 172.31.17.33:6443: connect: connection refused
Oct 2 19:32:15.898786 kubelet[2026]: E1002 19:32:15.898607 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:16.143309 kubelet[2026]: E1002 19:32:16.143259 2026 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.17.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.68?timeout=10s": dial tcp 172.31.17.33:6443: connect: connection refused
Oct 2 19:32:16.899549 kubelet[2026]: E1002 19:32:16.899484 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:17.900030 kubelet[2026]: E1002 19:32:17.899986 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:18.901782 kubelet[2026]: E1002 19:32:18.901737 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:19.858386 kubelet[2026]: E1002 19:32:19.858340 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:19.902932 kubelet[2026]: E1002 19:32:19.902881 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:20.903792 kubelet[2026]: E1002 19:32:20.903696 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:21.904434 kubelet[2026]: E1002 19:32:21.904367 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:22.905084 kubelet[2026]: E1002 19:32:22.905025 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:23.905168 kubelet[2026]: E1002 19:32:23.905131 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:24.622113 kubelet[2026]: E1002 19:32:24.622066 2026 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:24.859184 kubelet[2026]: E1002 19:32:24.859143 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:24.907294 kubelet[2026]: E1002 19:32:24.906874 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:25.907273 kubelet[2026]: E1002 19:32:25.907210 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:26.907424 kubelet[2026]: E1002 19:32:26.907338 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:26.944487 kubelet[2026]: E1002 19:32:26.944448 2026 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.17.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.68?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Oct 2 19:32:27.907566 kubelet[2026]: E1002 19:32:27.907492 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:28.908663 kubelet[2026]: E1002 19:32:28.908623 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:29.860690 kubelet[2026]: E1002 19:32:29.860634 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:29.909433 kubelet[2026]: E1002 19:32:29.909378 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:30.910489 kubelet[2026]: E1002 19:32:30.910418 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:31.910956 kubelet[2026]: E1002 19:32:31.910888 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:32.912080 kubelet[2026]: E1002 19:32:32.912002 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:33.758233 kubelet[2026]: E1002 19:32:33.758188 2026 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"172.31.27.68\": Get \"https://172.31.17.33:6443/api/v1/nodes/172.31.27.68?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 2 19:32:33.912482 kubelet[2026]: E1002 19:32:33.912439 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:34.861731 kubelet[2026]: E1002 19:32:34.861667 2026 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:34.913607 kubelet[2026]: E1002 19:32:34.913546 2026 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"