Oct 2 20:38:29.032775 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 2 20:38:29.032794 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 20:38:29.032802 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Oct 2 20:38:29.032809 kernel: printk: bootconsole [pl11] enabled Oct 2 20:38:29.032813 kernel: efi: EFI v2.70 by EDK II Oct 2 20:38:29.032819 kernel: efi: ACPI 2.0=0x3fd8d018 SMBIOS=0x3fd6a000 SMBIOS 3.0=0x3fd68000 MEMATTR=0x3ef3f098 RNG=0x3fd8d998 MEMRESERVE=0x37eb7f98 Oct 2 20:38:29.032825 kernel: random: crng init done Oct 2 20:38:29.032831 kernel: ACPI: Early table checksum verification disabled Oct 2 20:38:29.032836 kernel: ACPI: RSDP 0x000000003FD8D018 000024 (v02 VRTUAL) Oct 2 20:38:29.032841 kernel: ACPI: XSDT 0x000000003FD8DF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 20:38:29.032847 kernel: ACPI: FACP 0x000000003FD8DC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 20:38:29.032853 kernel: ACPI: DSDT 0x000000003EBD6018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Oct 2 20:38:29.032859 kernel: ACPI: DBG2 0x000000003FD8DB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 20:38:29.032864 kernel: ACPI: GTDT 0x000000003FD8DD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 20:38:29.032871 kernel: ACPI: OEM0 0x000000003FD8D098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 20:38:29.032876 kernel: ACPI: SPCR 0x000000003FD8DA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 20:38:29.032882 kernel: ACPI: APIC 0x000000003FD8D818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 20:38:29.032889 kernel: ACPI: SRAT 0x000000003FD8D198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 20:38:29.032895 kernel: ACPI: PPTT 0x000000003FD8D418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Oct 2 20:38:29.032900 kernel: ACPI: BGRT 0x000000003FD8DE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Oct 2 20:38:29.032906 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Oct 2 20:38:29.032912 kernel: NUMA: Failed to initialise from firmware Oct 2 20:38:29.032917 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Oct 2 20:38:29.032923 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff] Oct 2 20:38:29.032929 kernel: Zone ranges: Oct 2 20:38:29.032934 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Oct 2 20:38:29.032940 kernel: DMA32 empty Oct 2 20:38:29.032946 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Oct 2 20:38:29.032952 kernel: Movable zone start for each node Oct 2 20:38:29.032957 kernel: Early memory node ranges Oct 2 20:38:29.032963 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Oct 2 20:38:29.032969 kernel: node 0: [mem 0x0000000000824000-0x000000003ec84fff] Oct 2 20:38:29.032974 kernel: node 0: [mem 0x000000003ec85000-0x000000003ecadfff] Oct 2 20:38:29.032980 kernel: node 0: [mem 0x000000003ecae000-0x000000003fd2dfff] Oct 2 20:38:29.032985 kernel: node 0: [mem 0x000000003fd2e000-0x000000003fd81fff] Oct 2 20:38:29.032991 kernel: node 0: [mem 0x000000003fd82000-0x000000003fd8dfff] Oct 2 20:38:29.032997 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fd91fff] Oct 2 20:38:29.033002 kernel: node 0: [mem 0x000000003fd92000-0x000000003fffffff] Oct 2 20:38:29.033008 kernel: node 0: [mem 
0x0000000100000000-0x00000001bfffffff] Oct 2 20:38:29.033014 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Oct 2 20:38:29.033023 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Oct 2 20:38:29.033029 kernel: psci: probing for conduit method from ACPI. Oct 2 20:38:29.033035 kernel: psci: PSCIv1.1 detected in firmware. Oct 2 20:38:29.033041 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 20:38:29.033048 kernel: psci: MIGRATE_INFO_TYPE not supported. Oct 2 20:38:29.033054 kernel: psci: SMC Calling Convention v1.4 Oct 2 20:38:29.044746 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Oct 2 20:38:29.044770 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Oct 2 20:38:29.044777 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 20:38:29.044784 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 20:38:29.044791 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 2 20:38:29.044797 kernel: Detected PIPT I-cache on CPU0 Oct 2 20:38:29.044803 kernel: CPU features: detected: GIC system register CPU interface Oct 2 20:38:29.044809 kernel: CPU features: detected: Hardware dirty bit management Oct 2 20:38:29.044815 kernel: CPU features: detected: Spectre-BHB Oct 2 20:38:29.044822 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 20:38:29.044832 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 20:38:29.044838 kernel: CPU features: detected: ARM erratum 1418040 Oct 2 20:38:29.044844 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Oct 2 20:38:29.044850 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Oct 2 20:38:29.044862 kernel: Policy zone: Normal Oct 2 20:38:29.044870 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 20:38:29.044877 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 20:38:29.044884 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 20:38:29.044890 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 20:38:29.044896 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 20:38:29.044904 kernel: software IO TLB: mapped [mem 0x000000003abd6000-0x000000003ebd6000] (64MB) Oct 2 20:38:29.044911 kernel: Memory: 3992064K/4194160K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 202096K reserved, 0K cma-reserved) Oct 2 20:38:29.044917 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 20:38:29.044923 kernel: trace event string verifier disabled Oct 2 20:38:29.044929 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 20:38:29.044936 kernel: rcu: RCU event tracing is enabled. Oct 2 20:38:29.044942 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 20:38:29.044948 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 20:38:29.044954 kernel: Tracing variant of Tasks RCU enabled. Oct 2 20:38:29.044961 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 20:38:29.044967 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 20:38:29.044974 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 20:38:29.044980 kernel: GICv3: 960 SPIs implemented Oct 2 20:38:29.044986 kernel: GICv3: 0 Extended SPIs implemented Oct 2 20:38:29.044992 kernel: GICv3: Distributor has no Range Selector support Oct 2 20:38:29.044998 kernel: Root IRQ handler: gic_handle_irq Oct 2 20:38:29.045004 kernel: GICv3: 16 PPIs implemented Oct 2 20:38:29.045010 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Oct 2 20:38:29.045016 kernel: ITS: No ITS available, not enabling LPIs Oct 2 20:38:29.045023 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 20:38:29.045029 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 2 20:38:29.045035 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 2 20:38:29.045041 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 2 20:38:29.045049 kernel: Console: colour dummy device 80x25 Oct 2 20:38:29.045055 kernel: printk: console [tty1] enabled Oct 2 20:38:29.045079 kernel: ACPI: Core revision 20210730 Oct 2 20:38:29.045086 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 2 20:38:29.045093 kernel: pid_max: default: 32768 minimum: 301 Oct 2 20:38:29.045099 kernel: LSM: Security Framework initializing Oct 2 20:38:29.045105 kernel: SELinux: Initializing. Oct 2 20:38:29.045112 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 20:38:29.045118 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 20:38:29.045127 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Oct 2 20:38:29.045133 kernel: Hyper-V: Host Build 10.0.22477.1341-1-0 Oct 2 20:38:29.045139 kernel: rcu: Hierarchical SRCU implementation. Oct 2 20:38:29.045146 kernel: Remapping and enabling EFI services. Oct 2 20:38:29.045152 kernel: smp: Bringing up secondary CPUs ... Oct 2 20:38:29.045158 kernel: Detected PIPT I-cache on CPU1 Oct 2 20:38:29.045165 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Oct 2 20:38:29.045171 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 20:38:29.045177 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 2 20:38:29.045185 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 20:38:29.045191 kernel: SMP: Total of 2 processors activated. 
Oct 2 20:38:29.045197 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 20:38:29.045204 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Oct 2 20:38:29.045210 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 2 20:38:29.045217 kernel: CPU features: detected: CRC32 instructions Oct 2 20:38:29.045223 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 2 20:38:29.045229 kernel: CPU features: detected: LSE atomic instructions Oct 2 20:38:29.045236 kernel: CPU features: detected: Privileged Access Never Oct 2 20:38:29.045243 kernel: CPU: All CPU(s) started at EL1 Oct 2 20:38:29.045250 kernel: alternatives: patching kernel code Oct 2 20:38:29.045260 kernel: devtmpfs: initialized Oct 2 20:38:29.045268 kernel: KASLR enabled Oct 2 20:38:29.045274 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 20:38:29.045281 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 20:38:29.045288 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 20:38:29.045295 kernel: SMBIOS 3.1.0 present. Oct 2 20:38:29.045301 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 05/16/2022 Oct 2 20:38:29.045309 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 20:38:29.045317 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 20:38:29.045324 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 20:38:29.045330 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 20:38:29.045337 kernel: audit: initializing netlink subsys (disabled) Oct 2 20:38:29.045344 kernel: audit: type=2000 audit(0.086:1): state=initialized audit_enabled=0 res=1 Oct 2 20:38:29.045350 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 20:38:29.045357 kernel: cpuidle: using governor menu Oct 2 20:38:29.045365 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Oct 2 20:38:29.045372 kernel: ASID allocator initialised with 32768 entries Oct 2 20:38:29.045378 kernel: ACPI: bus type PCI registered Oct 2 20:38:29.045385 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 20:38:29.045392 kernel: Serial: AMBA PL011 UART driver Oct 2 20:38:29.045398 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 20:38:29.045405 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 20:38:29.045411 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 20:38:29.045418 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 20:38:29.045426 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 20:38:29.045432 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 20:38:29.045439 kernel: ACPI: Added _OSI(Module Device) Oct 2 20:38:29.045446 kernel: ACPI: Added _OSI(Processor Device) Oct 2 20:38:29.045452 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 20:38:29.045459 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 20:38:29.045465 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 20:38:29.045472 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 20:38:29.045478 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 20:38:29.045486 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 20:38:29.045493 kernel: ACPI: Interpreter enabled Oct 2 20:38:29.045499 kernel: ACPI: Using GIC for interrupt routing Oct 2 20:38:29.045506 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Oct 2 20:38:29.045512 kernel: printk: console [ttyAMA0] enabled Oct 2 20:38:29.045519 kernel: printk: bootconsole [pl11] disabled Oct 2 20:38:29.045526 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Oct 2 20:38:29.045532 kernel: iommu: Default domain type: Translated Oct 2 20:38:29.045539 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 20:38:29.045546 kernel: vgaarb: loaded Oct 2 20:38:29.045553 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 20:38:29.045560 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 20:38:29.045566 kernel: PTP clock support registered Oct 2 20:38:29.045573 kernel: Registered efivars operations Oct 2 20:38:29.045580 kernel: No ACPI PMU IRQ for CPU0 Oct 2 20:38:29.045586 kernel: No ACPI PMU IRQ for CPU1 Oct 2 20:38:29.045592 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 20:38:29.045599 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 20:38:29.045607 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 20:38:29.045613 kernel: pnp: PnP ACPI init Oct 2 20:38:29.045620 kernel: pnp: PnP ACPI: found 0 devices Oct 2 20:38:29.045626 kernel: NET: Registered PF_INET protocol family Oct 2 20:38:29.045633 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 20:38:29.045640 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 20:38:29.045646 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 20:38:29.045653 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 20:38:29.045660 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 20:38:29.045668 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 20:38:29.045674 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 20:38:29.045681 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 20:38:29.045688 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 20:38:29.045694 kernel: PCI: CLS 0 bytes, default 64 Oct 2 20:38:29.045701 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Oct 2 20:38:29.045707 kernel: kvm [1]: HYP mode not available Oct 2 20:38:29.045714 kernel: Initialise system trusted keyrings Oct 2 20:38:29.045720 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 20:38:29.045728 kernel: Key type asymmetric registered Oct 2 20:38:29.045735 kernel: Asymmetric key parser 'x509' registered Oct 2 20:38:29.045741 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 20:38:29.045748 kernel: io scheduler mq-deadline registered Oct 2 20:38:29.045754 kernel: io scheduler kyber registered Oct 2 20:38:29.045761 kernel: io scheduler bfq registered Oct 2 20:38:29.045768 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 20:38:29.045774 kernel: thunder_xcv, ver 1.0 Oct 2 20:38:29.045781 kernel: thunder_bgx, ver 1.0 Oct 2 20:38:29.045788 kernel: nicpf, ver 1.0 Oct 2 20:38:29.045795 kernel: nicvf, ver 1.0 Oct 2 20:38:29.045968 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 20:38:29.046031 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T20:38:28 UTC (1696279108) Oct 2 20:38:29.046040 kernel: efifb: probing for efifb Oct 2 20:38:29.046048 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Oct 2 20:38:29.046054 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Oct 2 20:38:29.046073 kernel: efifb: scrolling: redraw Oct 2 20:38:29.046084 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 2 20:38:29.046091 kernel: Console: switching to colour frame buffer device 128x48 Oct 2 20:38:29.046097 kernel: fb0: EFI VGA frame buffer device Oct 2 20:38:29.046104 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Oct 2 20:38:29.046111 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 20:38:29.046117 kernel: NET: Registered PF_INET6 protocol family Oct 2 20:38:29.046124 kernel: Segment Routing with IPv6 Oct 2 20:38:29.046130 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 20:38:29.046137 kernel: NET: Registered PF_PACKET protocol family Oct 2 20:38:29.046145 kernel: Key type dns_resolver registered Oct 2 20:38:29.046152 kernel: registered taskstats version 1 Oct 2 20:38:29.046158 kernel: Loading compiled-in X.509 certificates Oct 2 20:38:29.046165 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 20:38:29.046172 kernel: Key type .fscrypt registered Oct 2 20:38:29.046179 kernel: Key type fscrypt-provisioning registered Oct 2 20:38:29.046185 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 20:38:29.046192 kernel: ima: Allocated hash algorithm: sha1 Oct 2 20:38:29.046199 kernel: ima: No architecture policies found Oct 2 20:38:29.046206 kernel: Freeing unused kernel memory: 34560K Oct 2 20:38:29.046213 kernel: Run /init as init process Oct 2 20:38:29.046220 kernel: with arguments: Oct 2 20:38:29.046226 kernel: /init Oct 2 20:38:29.046232 kernel: with environment: Oct 2 20:38:29.046238 kernel: HOME=/ Oct 2 20:38:29.046245 kernel: TERM=linux Oct 2 20:38:29.046251 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 20:38:29.046260 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:38:29.046270 systemd[1]: Detected virtualization microsoft. Oct 2 20:38:29.046278 systemd[1]: Detected architecture arm64. Oct 2 20:38:29.046285 systemd[1]: Running in initrd. Oct 2 20:38:29.046291 systemd[1]: No hostname configured, using default hostname. Oct 2 20:38:29.046298 systemd[1]: Hostname set to . Oct 2 20:38:29.046306 systemd[1]: Initializing machine ID from random generator. Oct 2 20:38:29.046312 systemd[1]: Queued start job for default target initrd.target. Oct 2 20:38:29.046321 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:38:29.046328 systemd[1]: Reached target cryptsetup.target. Oct 2 20:38:29.046335 systemd[1]: Reached target paths.target. Oct 2 20:38:29.046342 systemd[1]: Reached target slices.target. Oct 2 20:38:29.046349 systemd[1]: Reached target swap.target. Oct 2 20:38:29.046356 systemd[1]: Reached target timers.target. Oct 2 20:38:29.046363 systemd[1]: Listening on iscsid.socket. Oct 2 20:38:29.046370 systemd[1]: Listening on iscsiuio.socket. Oct 2 20:38:29.046378 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 20:38:29.046386 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 20:38:29.046393 systemd[1]: Listening on systemd-journald.socket. Oct 2 20:38:29.046400 systemd[1]: Listening on systemd-networkd.socket. Oct 2 20:38:29.046407 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:38:29.046414 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:38:29.046421 systemd[1]: Reached target sockets.target. Oct 2 20:38:29.046428 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:38:29.046435 systemd[1]: Finished network-cleanup.service. Oct 2 20:38:29.046443 systemd[1]: Starting systemd-fsck-usr.service... 
Oct 2 20:38:29.046450 systemd[1]: Starting systemd-journald.service... Oct 2 20:38:29.046457 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:38:29.046464 systemd[1]: Starting systemd-resolved.service... Oct 2 20:38:29.046471 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 20:38:29.046482 systemd-journald[276]: Journal started Oct 2 20:38:29.046523 systemd-journald[276]: Runtime Journal (/run/log/journal/2fc27a94048347908ae17404892e421a) is 8.0M, max 78.6M, 70.6M free. Oct 2 20:38:29.031102 systemd-modules-load[277]: Inserted module 'overlay' Oct 2 20:38:29.074084 systemd[1]: Started systemd-journald.service. Oct 2 20:38:29.074123 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 20:38:29.066797 systemd-resolved[278]: Positive Trust Anchors: Oct 2 20:38:29.113156 kernel: audit: type=1130 audit(1696279109.078:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.113179 kernel: Bridge firewalling registered Oct 2 20:38:29.113188 kernel: SCSI subsystem initialized Oct 2 20:38:29.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.066811 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 20:38:29.168605 kernel: audit: type=1130 audit(1696279109.117:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.168629 kernel: audit: type=1130 audit(1696279109.136:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.168638 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 20:38:29.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.066838 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 20:38:29.214470 kernel: audit: type=1130 audit(1696279109.172:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:38:29.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.069806 systemd-resolved[278]: Defaulting to hostname 'linux'. Oct 2 20:38:29.251336 kernel: device-mapper: uevent: version 1.0.3 Oct 2 20:38:29.251357 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 20:38:29.251366 kernel: audit: type=1130 audit(1696279109.224:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.097771 systemd[1]: Started systemd-resolved.service. Oct 2 20:38:29.101201 systemd-modules-load[277]: Inserted module 'br_netfilter' Oct 2 20:38:29.117436 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:38:29.136515 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 20:38:29.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.216362 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 20:38:29.245084 systemd[1]: Reached target nss-lookup.target. Oct 2 20:38:29.362669 kernel: audit: type=1130 audit(1696279109.284:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.362690 kernel: audit: type=1130 audit(1696279109.314:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.362703 kernel: audit: type=1130 audit(1696279109.342:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.250342 systemd-modules-load[277]: Inserted module 'dm_multipath' Oct 2 20:38:29.258985 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 20:38:29.267988 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 20:38:29.394828 kernel: audit: type=1130 audit(1696279109.376:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.279534 systemd[1]: Finished systemd-modules-load.service. 
Oct 2 20:38:29.399149 dracut-cmdline[297]: dracut-dracut-053 Oct 2 20:38:29.303424 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 20:38:29.408037 dracut-cmdline[297]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 20:38:29.315294 systemd[1]: Starting systemd-sysctl.service... Oct 2 20:38:29.338188 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 20:38:29.343233 systemd[1]: Starting dracut-cmdline.service... Oct 2 20:38:29.364973 systemd[1]: Finished systemd-sysctl.service. Oct 2 20:38:29.539084 kernel: Loading iSCSI transport class v2.0-870. Oct 2 20:38:29.550081 kernel: iscsi: registered transport (tcp) Oct 2 20:38:29.568929 kernel: iscsi: registered transport (qla4xxx) Oct 2 20:38:29.568962 kernel: QLogic iSCSI HBA Driver Oct 2 20:38:29.642839 systemd[1]: Finished dracut-cmdline.service. Oct 2 20:38:29.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:29.648488 systemd[1]: Starting dracut-pre-udev.service... Oct 2 20:38:29.711088 kernel: raid6: neonx8 gen() 13823 MB/s Oct 2 20:38:29.731071 kernel: raid6: neonx8 xor() 10837 MB/s Oct 2 20:38:29.751071 kernel: raid6: neonx4 gen() 13535 MB/s Oct 2 20:38:29.772071 kernel: raid6: neonx4 xor() 11297 MB/s Oct 2 20:38:29.792074 kernel: raid6: neonx2 gen() 12955 MB/s Oct 2 20:38:29.812075 kernel: raid6: neonx2 xor() 10238 MB/s Oct 2 20:38:29.833071 kernel: raid6: neonx1 gen() 10514 MB/s Oct 2 20:38:29.853075 kernel: raid6: neonx1 xor() 8798 MB/s Oct 2 20:38:29.873074 kernel: raid6: int64x8 gen() 6294 MB/s Oct 2 20:38:29.894071 kernel: raid6: int64x8 xor() 3548 MB/s Oct 2 20:38:29.914075 kernel: raid6: int64x4 gen() 7242 MB/s Oct 2 20:38:29.934074 kernel: raid6: int64x4 xor() 3857 MB/s Oct 2 20:38:29.955072 kernel: raid6: int64x2 gen() 6145 MB/s Oct 2 20:38:29.975075 kernel: raid6: int64x2 xor() 3324 MB/s Oct 2 20:38:29.995071 kernel: raid6: int64x1 gen() 5046 MB/s Oct 2 20:38:30.019584 kernel: raid6: int64x1 xor() 2646 MB/s Oct 2 20:38:30.019605 kernel: raid6: using algorithm neonx8 gen() 13823 MB/s Oct 2 20:38:30.019621 kernel: raid6: .... xor() 10837 MB/s, rmw enabled Oct 2 20:38:30.023446 kernel: raid6: using neon recovery algorithm Oct 2 20:38:30.040073 kernel: xor: measuring software checksum speed Oct 2 20:38:30.044070 kernel: 8regs : 17304 MB/sec Oct 2 20:38:30.051629 kernel: 32regs : 20749 MB/sec Oct 2 20:38:30.051648 kernel: arm64_neon : 27901 MB/sec Oct 2 20:38:30.051664 kernel: xor: using function: arm64_neon (27901 MB/sec) Oct 2 20:38:30.110078 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 20:38:30.126827 systemd[1]: Finished dracut-pre-udev.service. Oct 2 20:38:30.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:38:30.134000 audit: BPF prog-id=7 op=LOAD Oct 2 20:38:30.134000 audit: BPF prog-id=8 op=LOAD Oct 2 20:38:30.135268 systemd[1]: Starting systemd-udevd.service... Oct 2 20:38:30.154586 systemd-udevd[474]: Using default interface naming scheme 'v252'. Oct 2 20:38:30.161183 systemd[1]: Started systemd-udevd.service. Oct 2 20:38:30.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:30.170490 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 20:38:30.197549 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation Oct 2 20:38:30.246819 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 20:38:30.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:30.251788 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 20:38:30.291720 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:38:30.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:30.356085 kernel: hv_vmbus: Vmbus version:5.3 Oct 2 20:38:30.362087 kernel: hv_vmbus: registering driver hid_hyperv Oct 2 20:38:30.372086 kernel: hv_vmbus: registering driver hv_netvsc Oct 2 20:38:30.392152 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Oct 2 20:38:30.392187 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Oct 2 20:38:30.392305 kernel: hv_vmbus: registering driver hv_storvsc Oct 2 20:38:30.400854 kernel: hv_vmbus: registering driver hyperv_keyboard Oct 2 20:38:30.407081 kernel: scsi host1: storvsc_host_t Oct 2 20:38:30.411086 kernel: scsi host0: storvsc_host_t Oct 2 20:38:30.429017 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Oct 2 20:38:30.429042 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Oct 2 20:38:30.435251 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Oct 2 20:38:30.454308 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Oct 2 20:38:30.454465 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 2 20:38:30.455077 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Oct 2 20:38:30.468178 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Oct 2 20:38:30.468326 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Oct 2 20:38:30.471945 kernel: sd 0:0:0:0: [sda] Write Protect is off Oct 2 20:38:30.478403 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Oct 2 20:38:30.478538 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Oct 2 20:38:30.484077 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 20:38:30.488074 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Oct 2 20:38:30.606093 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (544) Oct 2 20:38:30.607805 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 20:38:30.632532 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Oct 2 20:38:30.669035 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 20:38:30.677620 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 20:38:30.683990 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 20:38:30.695273 systemd[1]: Starting disk-uuid.service... Oct 2 20:38:30.723211 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 20:38:31.154283 kernel: hv_netvsc 0022487c-1e7d-0022-487c-1e7d0022487c eth0: VF slot 1 added Oct 2 20:38:31.168712 kernel: hv_vmbus: registering driver hv_pci Oct 2 20:38:31.168759 kernel: hv_pci 3ac34845-51e8-429a-b2db-3abaf72b02e4: PCI VMBus probing: Using version 0x10004 Oct 2 20:38:31.184992 kernel: hv_pci 3ac34845-51e8-429a-b2db-3abaf72b02e4: PCI host bridge to bus 51e8:00 Oct 2 20:38:31.185133 kernel: pci_bus 51e8:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Oct 2 20:38:31.185238 kernel: pci_bus 51e8:00: No busn resource found for root bus, will use [bus 00-ff] Oct 2 20:38:31.197474 kernel: pci 51e8:00:02.0: [15b3:1018] type 00 class 0x020000 Oct 2 20:38:31.208727 kernel: pci 51e8:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Oct 2 20:38:31.228126 kernel: pci 51e8:00:02.0: enabling Extended Tags Oct 2 20:38:31.250773 kernel: pci 51e8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 51e8:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Oct 2 20:38:31.250916 kernel: pci_bus 51e8:00: busn_res: [bus 00-ff] end is updated to 00 Oct 2 20:38:31.251000 kernel: pci 51e8:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Oct 2 20:38:31.482087 kernel: mlx5_core 51e8:00:02.0: firmware version: 16.31.2424 Oct 2 20:38:31.649084 kernel: mlx5_core 51e8:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Oct 2 20:38:31.744237 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 20:38:31.744997 disk-uuid[592]: The operation has completed successfully. Oct 2 20:38:31.798901 kernel: hv_netvsc 0022487c-1e7d-0022-487c-1e7d0022487c eth0: VF registering: eth1 Oct 2 20:38:31.799099 kernel: mlx5_core 51e8:00:02.0 eth1: joined to eth0 Oct 2 20:38:31.813088 kernel: mlx5_core 51e8:00:02.0 enP20968s1: renamed from eth1 Oct 2 20:38:31.842915 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 20:38:31.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:31.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:31.843014 systemd[1]: Finished disk-uuid.service. Oct 2 20:38:31.847756 systemd[1]: Starting verity-setup.service... Oct 2 20:38:31.885100 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 20:38:31.964277 systemd[1]: Found device dev-mapper-usr.device. Oct 2 20:38:31.969191 systemd[1]: Mounting sysusr-usr.mount... Oct 2 20:38:31.979937 systemd[1]: Finished verity-setup.service. Oct 2 20:38:31.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:32.037084 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Oct 2 20:38:32.037243 systemd[1]: Mounted sysusr-usr.mount. Oct 2 20:38:32.041263 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 20:38:32.042036 systemd[1]: Starting ignition-setup.service... Oct 2 20:38:32.052577 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 20:38:32.082343 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 2 20:38:32.082386 kernel: BTRFS info (device sda6): using free space tree Oct 2 20:38:32.087637 kernel: BTRFS info (device sda6): has skinny extents Oct 2 20:38:32.128559 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 20:38:32.188406 systemd[1]: Finished ignition-setup.service. Oct 2 20:38:32.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:32.193292 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 20:38:32.241584 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 20:38:32.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:32.250000 audit: BPF prog-id=9 op=LOAD Oct 2 20:38:32.251197 systemd[1]: Starting systemd-networkd.service... Oct 2 20:38:32.279948 systemd-networkd[876]: lo: Link UP Oct 2 20:38:32.279961 systemd-networkd[876]: lo: Gained carrier Oct 2 20:38:32.280362 systemd-networkd[876]: Enumeration completed Oct 2 20:38:32.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:32.283824 systemd-networkd[876]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 20:38:32.287191 systemd[1]: Started systemd-networkd.service. Oct 2 20:38:32.291531 systemd[1]: Reached target network.target. Oct 2 20:38:32.299639 systemd[1]: Starting iscsiuio.service... Oct 2 20:38:32.320198 systemd[1]: Started iscsiuio.service. Oct 2 20:38:32.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:32.328639 systemd[1]: Starting iscsid.service... Oct 2 20:38:32.335566 iscsid[881]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:38:32.335566 iscsid[881]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Oct 2 20:38:32.335566 iscsid[881]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 20:38:32.335566 iscsid[881]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 20:38:32.335566 iscsid[881]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 2 20:38:32.335566 iscsid[881]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:38:32.335566 iscsid[881]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 20:38:32.426124 kernel: mlx5_core 51e8:00:02.0 enP20968s1: Link up Oct 2 20:38:32.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:32.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:32.342883 systemd[1]: Started iscsid.service. Oct 2 20:38:32.363172 systemd[1]: Starting dracut-initqueue.service... Oct 2 20:38:32.405306 systemd[1]: Finished dracut-initqueue.service. Oct 2 20:38:32.419783 systemd[1]: Reached target remote-fs-pre.target. Oct 2 20:38:32.430529 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:38:32.438642 systemd[1]: Reached target remote-fs.target. Oct 2 20:38:32.447387 systemd[1]: Starting dracut-pre-mount.service... Oct 2 20:38:32.473148 systemd[1]: Finished dracut-pre-mount.service. Oct 2 20:38:32.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:32.502568 kernel: hv_netvsc 0022487c-1e7d-0022-487c-1e7d0022487c eth0: Data path switched to VF: enP20968s1 Oct 2 20:38:32.502729 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 20:38:32.502922 systemd-networkd[876]: enP20968s1: Link UP Oct 2 20:38:32.503128 systemd-networkd[876]: eth0: Link UP Oct 2 20:38:32.503485 systemd-networkd[876]: eth0: Gained carrier Oct 2 20:38:32.515483 systemd-networkd[876]: enP20968s1: Gained carrier Oct 2 20:38:32.532116 systemd-networkd[876]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16 Oct 2 20:38:33.082649 ignition[846]: Ignition 2.14.0 Oct 2 20:38:33.085544 ignition[846]: Stage: fetch-offline Oct 2 20:38:33.085649 ignition[846]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:38:33.085691 ignition[846]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 20:38:33.121858 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 20:38:33.122048 ignition[846]: parsed url from cmdline: "" Oct 2 20:38:33.122052 ignition[846]: no config URL provided Oct 2 20:38:33.122076 ignition[846]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 20:38:33.122085 ignition[846]: no config at "/usr/lib/ignition/user.ign" Oct 2 20:38:33.167933 kernel: kauditd_printk_skb: 18 callbacks suppressed Oct 2 20:38:33.167955 kernel: audit: type=1130 audit(1696279113.140:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.132403 systemd[1]: Finished ignition-fetch-offline.service. 
Oct 2 20:38:33.122091 ignition[846]: failed to fetch config: resource requires networking Oct 2 20:38:33.141748 systemd[1]: Starting ignition-fetch.service... Oct 2 20:38:33.122498 ignition[846]: Ignition finished successfully Oct 2 20:38:33.158378 ignition[900]: Ignition 2.14.0 Oct 2 20:38:33.158385 ignition[900]: Stage: fetch Oct 2 20:38:33.158483 ignition[900]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:38:33.158501 ignition[900]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 20:38:33.161280 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 20:38:33.161408 ignition[900]: parsed url from cmdline: "" Oct 2 20:38:33.161412 ignition[900]: no config URL provided Oct 2 20:38:33.161416 ignition[900]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 20:38:33.161424 ignition[900]: no config at "/usr/lib/ignition/user.ign" Oct 2 20:38:33.214787 unknown[900]: fetched base config from "system" Oct 2 20:38:33.161450 ignition[900]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Oct 2 20:38:33.214794 unknown[900]: fetched base config from "system" Oct 2 20:38:33.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.195602 ignition[900]: GET result: OK Oct 2 20:38:33.214800 unknown[900]: fetched user config from "azure" Oct 2 20:38:33.195679 ignition[900]: config has been read from IMDS userdata Oct 2 20:38:33.272739 kernel: audit: type=1130 audit(1696279113.238:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.234335 systemd[1]: Finished ignition-fetch.service. Oct 2 20:38:33.195724 ignition[900]: parsing config with SHA512: 0ae95dfceba7660a9ee95aeb3344dffeddd5fa20271b36d6cc3d77632158d4d333880b545f33cdfe9079a94b73eee4ef5497eeffa2eb09809b259734063e27f8 Oct 2 20:38:33.239567 systemd[1]: Starting ignition-kargs.service... Oct 2 20:38:33.225240 ignition[900]: fetch: fetch complete Oct 2 20:38:33.225247 ignition[900]: fetch: fetch passed Oct 2 20:38:33.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.285391 systemd[1]: Finished ignition-kargs.service. Oct 2 20:38:33.225299 ignition[900]: Ignition finished successfully Oct 2 20:38:33.336411 kernel: audit: type=1130 audit(1696279113.289:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.336437 kernel: audit: type=1130 audit(1696279113.319:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.290263 systemd[1]: Starting ignition-disks.service... 
Oct 2 20:38:33.275096 ignition[906]: Ignition 2.14.0 Oct 2 20:38:33.315294 systemd[1]: Finished ignition-disks.service. Oct 2 20:38:33.275102 ignition[906]: Stage: kargs Oct 2 20:38:33.319769 systemd[1]: Reached target initrd-root-device.target. Oct 2 20:38:33.275206 ignition[906]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:38:33.345150 systemd[1]: Reached target local-fs-pre.target. Oct 2 20:38:33.275230 ignition[906]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 20:38:33.351602 systemd[1]: Reached target local-fs.target. Oct 2 20:38:33.282528 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 20:38:33.358984 systemd[1]: Reached target sysinit.target. Oct 2 20:38:33.283615 ignition[906]: kargs: kargs passed Oct 2 20:38:33.365511 systemd[1]: Reached target basic.target. Oct 2 20:38:33.283659 ignition[906]: Ignition finished successfully Oct 2 20:38:33.378471 systemd[1]: Starting systemd-fsck-root.service... Oct 2 20:38:33.305944 ignition[912]: Ignition 2.14.0 Oct 2 20:38:33.417040 systemd-fsck[920]: ROOT: clean, 603/7326000 files, 481067/7359488 blocks Oct 2 20:38:33.305950 ignition[912]: Stage: disks Oct 2 20:38:33.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.420058 systemd[1]: Finished systemd-fsck-root.service. Oct 2 20:38:33.306087 ignition[912]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:38:33.459954 kernel: audit: type=1130 audit(1696279113.427:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.446109 systemd[1]: Mounting sysroot.mount... Oct 2 20:38:33.306105 ignition[912]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 20:38:33.312515 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 20:38:33.314417 ignition[912]: disks: disks passed Oct 2 20:38:33.314467 ignition[912]: Ignition finished successfully Oct 2 20:38:33.482153 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 20:38:33.479779 systemd[1]: Mounted sysroot.mount. Oct 2 20:38:33.485557 systemd[1]: Reached target initrd-root-fs.target. Oct 2 20:38:33.501790 systemd[1]: Mounting sysroot-usr.mount... Oct 2 20:38:33.511445 systemd[1]: Starting flatcar-metadata-hostname.service... Oct 2 20:38:33.521546 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 20:38:33.521587 systemd[1]: Reached target ignition-diskful.target. Oct 2 20:38:33.535950 systemd[1]: Mounted sysroot-usr.mount. Oct 2 20:38:33.553432 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 20:38:33.564267 systemd[1]: Starting initrd-setup-root.service... 
Oct 2 20:38:33.582076 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (930) Oct 2 20:38:33.582507 initrd-setup-root[935]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 20:38:33.599477 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 2 20:38:33.599530 kernel: BTRFS info (device sda6): using free space tree Oct 2 20:38:33.603984 kernel: BTRFS info (device sda6): has skinny extents Oct 2 20:38:33.604004 initrd-setup-root[943]: cut: /sysroot/etc/group: No such file or directory Oct 2 20:38:33.617411 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 20:38:33.637837 initrd-setup-root[977]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 20:38:33.632879 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 20:38:33.788757 systemd[1]: Finished initrd-setup-root.service. Oct 2 20:38:33.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.812603 systemd[1]: Starting ignition-mount.service... Oct 2 20:38:33.821035 systemd[1]: Starting sysroot-boot.service... Oct 2 20:38:33.827856 kernel: audit: type=1130 audit(1696279113.793:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.832554 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 20:38:33.837457 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 20:38:33.852760 systemd[1]: Finished sysroot-boot.service. Oct 2 20:38:33.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.877272 kernel: audit: type=1130 audit(1696279113.857:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.890789 ignition[1002]: INFO : Ignition 2.14.0 Oct 2 20:38:33.894566 ignition[1002]: INFO : Stage: mount Oct 2 20:38:33.894566 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:38:33.894566 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 20:38:33.935018 kernel: audit: type=1130 audit(1696279113.907:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:33.935112 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 20:38:33.935112 ignition[1002]: INFO : mount: mount passed Oct 2 20:38:33.935112 ignition[1002]: INFO : Ignition finished successfully Oct 2 20:38:33.904016 systemd[1]: Finished ignition-mount.service. 
Oct 2 20:38:34.039629 coreos-metadata[929]: Oct 02 20:38:34.039 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 2 20:38:34.047365 coreos-metadata[929]: Oct 02 20:38:34.047 INFO Fetch successful Oct 2 20:38:34.076819 coreos-metadata[929]: Oct 02 20:38:34.076 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Oct 2 20:38:34.090140 coreos-metadata[929]: Oct 02 20:38:34.090 INFO Fetch successful Oct 2 20:38:34.095267 coreos-metadata[929]: Oct 02 20:38:34.095 INFO wrote hostname ci-3510.3.0-a-b6df30be81 to /sysroot/etc/hostname Oct 2 20:38:34.103237 systemd[1]: Finished flatcar-metadata-hostname.service. Oct 2 20:38:34.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:34.108816 systemd[1]: Starting ignition-files.service... Oct 2 20:38:34.135266 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 20:38:34.144345 kernel: audit: type=1130 audit(1696279114.107:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:34.153081 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1010) Oct 2 20:38:34.164164 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 2 20:38:34.164190 kernel: BTRFS info (device sda6): using free space tree Oct 2 20:38:34.164207 kernel: BTRFS info (device sda6): has skinny extents Oct 2 20:38:34.172563 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 20:38:34.192407 ignition[1029]: INFO : Ignition 2.14.0 Oct 2 20:38:34.196353 ignition[1029]: INFO : Stage: files Oct 2 20:38:34.196353 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:38:34.196353 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 20:38:34.217875 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 20:38:34.217875 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Oct 2 20:38:34.217875 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 20:38:34.217875 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 20:38:34.229667 systemd-networkd[876]: eth0: Gained IPv6LL Oct 2 20:38:34.247773 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 20:38:34.247773 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 20:38:34.261591 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 20:38:34.261591 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 20:38:34.261591 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Oct 2 20:38:34.250971 unknown[1029]: wrote ssh authorized keys file for user: core Oct 2 20:38:34.616679 ignition[1029]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 20:38:34.851176 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Oct 2 20:38:34.866115 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 20:38:34.866115 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 20:38:34.866115 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-arm64.tar.gz: attempt #1 Oct 2 20:38:34.939901 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 20:38:35.023569 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: ebd055e9b2888624d006decd582db742131ed815d059d529ba21eaf864becca98a84b20a10eec91051b9d837c6855d28d5042bf5e9a454f4540aec6b82d37e96 Oct 2 20:38:35.038228 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 20:38:35.038228 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 20:38:35.038228 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubeadm: attempt #1 Oct 2 20:38:35.147727 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 20:38:35.477413 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: daab8965a4f617d1570d04c031ab4d55fff6aa13a61f0e4045f2338947f9fb0ee3a80fdee57cfe86db885390595460342181e1ec52b89f127ef09c393ae3db7f Oct 2 20:38:35.492195 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 20:38:35.492195 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 20:38:35.492195 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubelet: attempt #1 Oct 2 20:38:35.538018 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 20:38:36.604485 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 7b872a34d86e8aa75455a62a20f5cf16426de2ae54ffb8e0250fead920838df818201b8512c2f8bf4c939e5b21babab371f3a48803e2e861da9e6f8cdd022324 Oct 2 20:38:36.621407 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 20:38:36.621407 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 20:38:36.621407 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 20:38:36.621407 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/etc/docker/daemon.json" Oct 2 20:38:36.621407 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 20:38:36.621407 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Oct 2 20:38:36.621407 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 20:38:36.701215 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1029) Oct 2 20:38:36.673006 systemd[1]: mnt-oem4280688984.mount: Deactivated successfully. Oct 2 20:38:36.707196 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4280688984" Oct 2 20:38:36.707196 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4280688984": device or resource busy Oct 2 20:38:36.707196 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4280688984", trying btrfs: device or resource busy Oct 2 20:38:36.707196 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4280688984" Oct 2 20:38:36.707196 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4280688984" Oct 2 20:38:36.707196 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem4280688984" Oct 2 20:38:36.707196 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem4280688984" Oct 2 20:38:36.707196 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Oct 2 20:38:36.707196 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 20:38:36.707196 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 20:38:36.707196 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4194357853" Oct 2 20:38:36.707196 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(d): op(e): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4194357853": device or resource busy Oct 2 20:38:36.707196 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(d): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4194357853", trying btrfs: device or resource busy Oct 2 20:38:36.707196 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4194357853" Oct 2 20:38:36.910721 kernel: audit: type=1130 audit(1696279116.723:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:38:36.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.708946 systemd[1]: Finished ignition-files.service. Oct 2 20:38:36.918122 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(f): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4194357853" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(10): [started] unmounting "/mnt/oem4194357853" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): op(10): [finished] unmounting "/mnt/oem4194357853" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(11): [started] processing unit "waagent.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(11): [finished] processing unit "waagent.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(12): [started] processing unit "nvidia.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(12): [finished] processing unit "nvidia.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(13): [started] processing unit "prepare-cni-plugins.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(13): op(14): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(13): op(14): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(13): [finished] processing unit "prepare-cni-plugins.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(15): [started] processing unit "prepare-critools.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(15): op(16): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(15): op(16): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: 
op(15): [finished] processing unit "prepare-critools.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(18): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 20:38:36.918122 ignition[1029]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 20:38:36.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.726203 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 20:38:37.158502 ignition[1029]: INFO : files: op(19): [started] setting preset to enabled for "prepare-critools.service" Oct 2 20:38:37.158502 ignition[1029]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 20:38:37.158502 ignition[1029]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service" Oct 2 20:38:37.158502 ignition[1029]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service" Oct 2 20:38:37.158502 ignition[1029]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 20:38:37.158502 ignition[1029]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 20:38:37.158502 ignition[1029]: INFO : files: files passed Oct 2 20:38:37.158502 ignition[1029]: INFO : Ignition finished successfully Oct 2 20:38:37.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:38:37.257764 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 20:38:37.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.764599 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 20:38:37.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.775271 systemd[1]: Starting ignition-quench.service... Oct 2 20:38:37.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.798301 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 20:38:37.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.806350 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 20:38:37.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.806420 systemd[1]: Finished ignition-quench.service. Oct 2 20:38:36.818454 systemd[1]: Reached target ignition-complete.target. Oct 2 20:38:37.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.832498 systemd[1]: Starting initrd-parse-etc.service... Oct 2 20:38:37.344300 ignition[1067]: INFO : Ignition 2.14.0 Oct 2 20:38:37.344300 ignition[1067]: INFO : Stage: umount Oct 2 20:38:37.344300 ignition[1067]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:38:37.344300 ignition[1067]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Oct 2 20:38:37.344300 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Oct 2 20:38:37.344300 ignition[1067]: INFO : umount: umount passed Oct 2 20:38:37.344300 ignition[1067]: INFO : Ignition finished successfully Oct 2 20:38:37.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 20:38:36.871871 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 20:38:36.871955 systemd[1]: Finished initrd-parse-etc.service. Oct 2 20:38:36.885313 systemd[1]: Reached target initrd-fs.target. Oct 2 20:38:37.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.900844 systemd[1]: Reached target initrd.target. Oct 2 20:38:37.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.444000 audit: BPF prog-id=6 op=UNLOAD Oct 2 20:38:36.914544 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 20:38:36.915361 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 20:38:37.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.942818 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 20:38:37.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.954590 systemd[1]: Starting initrd-cleanup.service... Oct 2 20:38:37.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:36.980000 systemd[1]: Stopped target nss-lookup.target. Oct 2 20:38:36.986915 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 20:38:36.997598 systemd[1]: Stopped target timers.target. Oct 2 20:38:37.007674 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 20:38:37.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.007747 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 20:38:37.018462 systemd[1]: Stopped target initrd.target. Oct 2 20:38:37.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.032580 systemd[1]: Stopped target basic.target. Oct 2 20:38:37.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.047242 systemd[1]: Stopped target ignition-complete.target. Oct 2 20:38:37.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.058471 systemd[1]: Stopped target ignition-diskful.target. Oct 2 20:38:37.069343 systemd[1]: Stopped target initrd-root-device.target. Oct 2 20:38:37.084328 systemd[1]: Stopped target remote-fs.target. Oct 2 20:38:37.098530 systemd[1]: Stopped target remote-fs-pre.target. 
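The files stage logged above fetched cni-plugins, crictl, kubeadm and kubelet into /sysroot/opt, verified their SHA512 sums, and enabled several units via presets (op(17) through op(1a)). A hedged way to confirm both from the booted system, where the /sysroot prefix disappears after switch-root; the expected checksums are the ones Ignition printed above:

  sha512sum /opt/bin/kubeadm /opt/bin/kubelet
  systemctl is-enabled nvidia.service prepare-cni-plugins.service prepare-critools.service waagent.service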
Oct 2 20:38:37.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.109868 systemd[1]: Stopped target sysinit.target. Oct 2 20:38:37.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.120327 systemd[1]: Stopped target local-fs.target. Oct 2 20:38:37.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.130948 systemd[1]: Stopped target local-fs-pre.target. Oct 2 20:38:37.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.142611 systemd[1]: Stopped target swap.target. Oct 2 20:38:37.616753 kernel: hv_netvsc 0022487c-1e7d-0022-487c-1e7d0022487c eth0: Data path switched from VF: enP20968s1 Oct 2 20:38:37.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.153727 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 20:38:37.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.153787 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 20:38:37.162444 systemd[1]: Stopped target cryptsetup.target. Oct 2 20:38:37.173644 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 20:38:37.173692 systemd[1]: Stopped dracut-initqueue.service. Oct 2 20:38:37.185195 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 20:38:37.185238 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 20:38:37.196877 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 20:38:37.196911 systemd[1]: Stopped ignition-files.service. Oct 2 20:38:37.207727 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 2 20:38:37.207765 systemd[1]: Stopped flatcar-metadata-hostname.service. Oct 2 20:38:37.222043 systemd[1]: Stopping ignition-mount.service... Oct 2 20:38:37.234519 systemd[1]: Stopping iscsiuio.service... Oct 2 20:38:37.240658 systemd[1]: Stopping sysroot-boot.service... Oct 2 20:38:37.246564 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 20:38:37.246644 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 20:38:37.252490 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 20:38:37.252532 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 20:38:37.271225 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 20:38:37.271331 systemd[1]: Stopped iscsiuio.service. Oct 2 20:38:37.284959 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
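flatcar-metadata-hostname.service, stopped here, is the unit whose coreos-metadata output appears earlier in the log: it queried the Azure wireserver and the instance metadata service, then wrote the instance name to /sysroot/etc/hostname. A rough shell equivalent of that IMDS request (URL and api-version are copied from the log; IMDS additionally requires the Metadata header):

  curl -s -H "Metadata: true" \
    "http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text"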
Oct 2 20:38:37.285045 systemd[1]: Finished initrd-cleanup.service. Oct 2 20:38:37.290257 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 20:38:37.290757 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 20:38:37.290837 systemd[1]: Stopped ignition-mount.service. Oct 2 20:38:37.299945 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 20:38:37.299990 systemd[1]: Stopped ignition-disks.service. Oct 2 20:38:37.309052 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 20:38:37.309107 systemd[1]: Stopped ignition-kargs.service. Oct 2 20:38:37.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:37.313323 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 20:38:37.313363 systemd[1]: Stopped ignition-fetch.service. Oct 2 20:38:37.317446 systemd[1]: Stopped target network.target. Oct 2 20:38:37.326648 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 20:38:37.326700 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 20:38:37.335177 systemd[1]: Stopped target paths.target. Oct 2 20:38:37.347976 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 20:38:37.351085 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 20:38:37.356121 systemd[1]: Stopped target slices.target. Oct 2 20:38:37.363244 systemd[1]: Stopped target sockets.target. Oct 2 20:38:37.814585 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). Oct 2 20:38:37.814616 iscsid[881]: iscsid shutting down. Oct 2 20:38:37.373465 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 20:38:37.373504 systemd[1]: Closed iscsid.socket. Oct 2 20:38:37.389478 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 20:38:37.389520 systemd[1]: Closed iscsiuio.socket. Oct 2 20:38:37.399559 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 20:38:37.399603 systemd[1]: Stopped ignition-setup.service. Oct 2 20:38:37.409630 systemd[1]: Stopping systemd-networkd.service... Oct 2 20:38:37.417638 systemd[1]: Stopping systemd-resolved.service... Oct 2 20:38:37.422107 systemd-networkd[876]: eth0: DHCPv6 lease lost Oct 2 20:38:37.817000 audit: BPF prog-id=9 op=UNLOAD Oct 2 20:38:37.426433 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 20:38:37.426526 systemd[1]: Stopped systemd-networkd.service. Oct 2 20:38:37.436146 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 20:38:37.436229 systemd[1]: Stopped systemd-resolved.service. Oct 2 20:38:37.444287 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 20:38:37.444327 systemd[1]: Closed systemd-networkd.socket. Oct 2 20:38:37.451891 systemd[1]: Stopping network-cleanup.service... Oct 2 20:38:37.462242 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 20:38:37.462302 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 20:38:37.466929 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 20:38:37.466979 systemd[1]: Stopped systemd-sysctl.service. Oct 2 20:38:37.478500 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 20:38:37.478546 systemd[1]: Stopped systemd-modules-load.service. Oct 2 20:38:37.488645 systemd[1]: Stopping systemd-udevd.service... Oct 2 20:38:37.502811 systemd[1]: systemd-udevd.service: Deactivated successfully. 
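The "DHCPv6 lease lost" message comes from the initrd's systemd-networkd instance being shut down just before the root switch; later in the log the main system starts its own systemd-networkd and reconfigures the interface. A quick status check from the booted system, assuming networkd still manages eth0 there:

  networkctl status eth0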
Oct 2 20:38:37.502949 systemd[1]: Stopped systemd-udevd.service. Oct 2 20:38:37.510296 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 20:38:37.510332 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 20:38:37.518935 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 20:38:37.518969 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 20:38:37.527773 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 20:38:37.527815 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 20:38:37.531909 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 20:38:37.531941 systemd[1]: Stopped dracut-cmdline.service. Oct 2 20:38:37.539187 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 20:38:37.539224 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 20:38:37.550728 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 20:38:37.565167 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 20:38:37.565235 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 20:38:37.576857 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 20:38:37.576914 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 20:38:37.581128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 20:38:37.581173 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 20:38:37.589474 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 20:38:37.589590 systemd[1]: Stopped sysroot-boot.service. Oct 2 20:38:37.596555 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 20:38:37.596634 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 20:38:37.613649 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 20:38:37.613698 systemd[1]: Stopped initrd-setup-root.service. Oct 2 20:38:37.670434 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 20:38:37.670523 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 20:38:37.735474 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 20:38:37.735569 systemd[1]: Stopped network-cleanup.service. Oct 2 20:38:37.743706 systemd[1]: Reached target initrd-switch-root.target. Oct 2 20:38:37.752290 systemd[1]: Starting initrd-switch-root.service... Oct 2 20:38:37.772001 systemd[1]: Switching root. Oct 2 20:38:37.819005 systemd-journald[276]: Journal stopped Oct 2 20:38:42.235695 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 20:38:42.235714 kernel: SELinux: Class anon_inode not defined in policy. 
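"Journal stopped" followed by "Switching root" marks the hand-off from the initrd journal to the main system's journald; the records above it are preserved in the same boot's journal. A capture in this exact layout can be pulled from a booted machine with:

  journalctl --list-boots
  journalctl -b -o short-precise   # microsecond timestamps, like the lines in this log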
Oct 2 20:38:42.235723 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 20:38:42.235733 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 20:38:42.235741 kernel: SELinux: policy capability open_perms=1 Oct 2 20:38:42.235748 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 20:38:42.235757 kernel: SELinux: policy capability always_check_network=0 Oct 2 20:38:42.235765 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 20:38:42.235773 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 20:38:42.235781 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 20:38:42.235790 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 20:38:42.235798 kernel: kauditd_printk_skb: 42 callbacks suppressed Oct 2 20:38:42.235806 kernel: audit: type=1403 audit(1696279118.448:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 20:38:42.235816 systemd[1]: Successfully loaded SELinux policy in 133.008ms. Oct 2 20:38:42.235826 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.433ms. Oct 2 20:38:42.235838 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:38:42.235847 systemd[1]: Detected virtualization microsoft. Oct 2 20:38:42.235856 systemd[1]: Detected architecture arm64. Oct 2 20:38:42.235865 systemd[1]: Detected first boot. Oct 2 20:38:42.235874 systemd[1]: Hostname set to . Oct 2 20:38:42.235882 systemd[1]: Initializing machine ID from random generator. Oct 2 20:38:42.235892 kernel: audit: type=1400 audit(1696279118.738:82): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:38:42.235902 kernel: audit: type=1400 audit(1696279118.741:83): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:38:42.235911 kernel: audit: type=1334 audit(1696279118.753:84): prog-id=10 op=LOAD Oct 2 20:38:42.235919 kernel: audit: type=1334 audit(1696279118.753:85): prog-id=10 op=UNLOAD Oct 2 20:38:42.235927 kernel: audit: type=1334 audit(1696279118.769:86): prog-id=11 op=LOAD Oct 2 20:38:42.235935 kernel: audit: type=1334 audit(1696279118.769:87): prog-id=11 op=UNLOAD Oct 2 20:38:42.235944 systemd[1]: Populated /etc with preset unit settings. Oct 2 20:38:42.236134 kernel: mlx5_core 51e8:00:02.0: poll_health:739:(pid 0): device's health compromised - reached miss count Oct 2 20:38:42.236151 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:38:42.236161 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:38:42.236171 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
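The AVC records above carry permissive=1, and the policy load line notes that unknown classes are allowed, so denials here are logged rather than enforced. The enforcing state can be read straight from selinuxfs (0 means permissive); getenforce gives the same answer where the SELinux userspace tools are installed:

  cat /sys/fs/selinux/enforce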
Oct 2 20:38:42.236180 kernel: audit: type=1334 audit(1696279121.660:88): prog-id=12 op=LOAD Oct 2 20:38:42.236188 kernel: audit: type=1334 audit(1696279121.660:89): prog-id=3 op=UNLOAD Oct 2 20:38:42.236197 kernel: audit: type=1334 audit(1696279121.667:90): prog-id=13 op=LOAD Oct 2 20:38:42.236207 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 20:38:42.236216 systemd[1]: Stopped iscsid.service. Oct 2 20:38:42.236225 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 20:38:42.236234 systemd[1]: Stopped initrd-switch-root.service. Oct 2 20:38:42.236243 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 20:38:42.236254 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 20:38:42.236263 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 20:38:42.236272 systemd[1]: Created slice system-getty.slice. Oct 2 20:38:42.236282 systemd[1]: Created slice system-modprobe.slice. Oct 2 20:38:42.236292 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 20:38:42.236301 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 20:38:42.236310 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 20:38:42.236319 systemd[1]: Created slice user.slice. Oct 2 20:38:42.236328 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:38:42.236338 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 20:38:42.236347 systemd[1]: Set up automount boot.automount. Oct 2 20:38:42.236356 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 20:38:42.236366 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 20:38:42.236375 systemd[1]: Stopped target initrd-fs.target. Oct 2 20:38:42.236384 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 20:38:42.236393 systemd[1]: Reached target integritysetup.target. Oct 2 20:38:42.236403 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:38:42.236412 systemd[1]: Reached target remote-fs.target. Oct 2 20:38:42.236421 systemd[1]: Reached target slices.target. Oct 2 20:38:42.236430 systemd[1]: Reached target swap.target. Oct 2 20:38:42.236440 systemd[1]: Reached target torcx.target. Oct 2 20:38:42.236449 systemd[1]: Reached target veritysetup.target. Oct 2 20:38:42.236458 systemd[1]: Listening on systemd-coredump.socket. Oct 2 20:38:42.236468 systemd[1]: Listening on systemd-initctl.socket. Oct 2 20:38:42.236478 systemd[1]: Listening on systemd-networkd.socket. Oct 2 20:38:42.236487 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:38:42.236496 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:38:42.236505 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 20:38:42.236514 systemd[1]: Mounting dev-hugepages.mount... Oct 2 20:38:42.236523 systemd[1]: Mounting dev-mqueue.mount... Oct 2 20:38:42.236533 systemd[1]: Mounting media.mount... Oct 2 20:38:42.236542 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 20:38:42.236551 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 20:38:42.236562 systemd[1]: Mounting tmp.mount... Oct 2 20:38:42.236571 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 20:38:42.236580 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 20:38:42.236590 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:38:42.236599 systemd[1]: Starting modprobe@configfs.service... Oct 2 20:38:42.236608 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 20:38:42.236617 systemd[1]: Starting modprobe@drm.service... 
Oct 2 20:38:42.236626 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 20:38:42.236635 systemd[1]: Starting modprobe@fuse.service... Oct 2 20:38:42.236646 systemd[1]: Starting modprobe@loop.service... Oct 2 20:38:42.236655 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 20:38:42.236664 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 20:38:42.236673 kernel: fuse: init (API version 7.34) Oct 2 20:38:42.236682 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 20:38:42.236691 kernel: loop: module loaded Oct 2 20:38:42.236700 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 20:38:42.236709 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 20:38:42.236718 systemd[1]: Stopped systemd-journald.service. Oct 2 20:38:42.236729 systemd[1]: systemd-journald.service: Consumed 2.575s CPU time. Oct 2 20:38:42.236739 systemd[1]: Starting systemd-journald.service... Oct 2 20:38:42.236748 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:38:42.236757 systemd[1]: Starting systemd-network-generator.service... Oct 2 20:38:42.236766 systemd[1]: Starting systemd-remount-fs.service... Oct 2 20:38:42.236775 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 20:38:42.236785 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 20:38:42.236794 systemd[1]: Stopped verity-setup.service. Oct 2 20:38:42.236805 systemd-journald[1207]: Journal started Oct 2 20:38:42.236842 systemd-journald[1207]: Runtime Journal (/run/log/journal/14259ec1574e4ac2aa308f94d7dd82e7) is 8.0M, max 78.6M, 70.6M free. Oct 2 20:38:38.448000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 20:38:38.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:38:38.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:38:38.753000 audit: BPF prog-id=10 op=LOAD Oct 2 20:38:38.753000 audit: BPF prog-id=10 op=UNLOAD Oct 2 20:38:38.769000 audit: BPF prog-id=11 op=LOAD Oct 2 20:38:38.769000 audit: BPF prog-id=11 op=UNLOAD Oct 2 20:38:41.660000 audit: BPF prog-id=12 op=LOAD Oct 2 20:38:41.660000 audit: BPF prog-id=3 op=UNLOAD Oct 2 20:38:41.667000 audit: BPF prog-id=13 op=LOAD Oct 2 20:38:41.672000 audit: BPF prog-id=14 op=LOAD Oct 2 20:38:41.672000 audit: BPF prog-id=4 op=UNLOAD Oct 2 20:38:41.672000 audit: BPF prog-id=5 op=UNLOAD Oct 2 20:38:41.677000 audit: BPF prog-id=15 op=LOAD Oct 2 20:38:41.677000 audit: BPF prog-id=12 op=UNLOAD Oct 2 20:38:41.677000 audit: BPF prog-id=16 op=LOAD Oct 2 20:38:41.677000 audit: BPF prog-id=17 op=LOAD Oct 2 20:38:41.677000 audit: BPF prog-id=13 op=UNLOAD Oct 2 20:38:41.677000 audit: BPF prog-id=14 op=UNLOAD Oct 2 20:38:41.678000 audit: BPF prog-id=18 op=LOAD Oct 2 20:38:41.678000 audit: BPF prog-id=15 op=UNLOAD Oct 2 20:38:41.678000 audit: BPF prog-id=19 op=LOAD Oct 2 20:38:41.678000 audit: BPF prog-id=20 op=LOAD Oct 2 20:38:41.678000 audit: BPF prog-id=16 op=UNLOAD Oct 2 20:38:41.678000 audit: BPF prog-id=17 op=UNLOAD Oct 2 20:38:41.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 2 20:38:41.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:41.696000 audit: BPF prog-id=18 op=UNLOAD Oct 2 20:38:41.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:41.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.170000 audit: BPF prog-id=21 op=LOAD Oct 2 20:38:42.170000 audit: BPF prog-id=22 op=LOAD Oct 2 20:38:42.170000 audit: BPF prog-id=23 op=LOAD Oct 2 20:38:42.170000 audit: BPF prog-id=19 op=UNLOAD Oct 2 20:38:42.170000 audit: BPF prog-id=20 op=UNLOAD Oct 2 20:38:42.233000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 20:38:42.233000 audit[1207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe6a99ed0 a2=4000 a3=1 items=0 ppid=1 pid=1207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:38:42.233000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 20:38:39.181803 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:38:41.659023 systemd[1]: Queued start job for default target multi-user.target. Oct 2 20:38:39.193727 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:38:41.678667 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 20:38:39.193761 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:38:41.679019 systemd[1]: systemd-journald.service: Consumed 2.575s CPU time. Oct 2 20:38:39.193805 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 20:38:39.193815 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 20:38:39.193850 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 20:38:39.193864 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 20:38:39.194107 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 20:38:39.194143 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:38:39.194155 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:38:39.194656 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 20:38:39.194698 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 20:38:39.194716 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 20:38:39.194730 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 20:38:39.194756 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 20:38:39.194770 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 20:38:41.214987 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:41Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:38:41.215263 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:41Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker 
/bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:38:41.215357 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:41Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:38:41.215504 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:41Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:38:41.215551 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:41Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 20:38:41.215605 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2023-10-02T20:38:41Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 20:38:42.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.253980 systemd[1]: Started systemd-journald.service. Oct 2 20:38:42.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.255717 systemd[1]: Mounted dev-hugepages.mount. Oct 2 20:38:42.259996 systemd[1]: Mounted dev-mqueue.mount. Oct 2 20:38:42.269505 systemd[1]: Mounted media.mount. Oct 2 20:38:42.272852 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 20:38:42.277208 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 20:38:42.281298 systemd[1]: Mounted tmp.mount. Oct 2 20:38:42.284666 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 20:38:42.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.289125 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:38:42.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.293691 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 20:38:42.293809 systemd[1]: Finished modprobe@configfs.service. Oct 2 20:38:42.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:38:42.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.298353 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 20:38:42.298465 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 20:38:42.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.302748 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 20:38:42.302861 systemd[1]: Finished modprobe@drm.service. Oct 2 20:38:42.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.307094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 20:38:42.307203 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 20:38:42.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.312167 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 20:38:42.312281 systemd[1]: Finished modprobe@fuse.service. Oct 2 20:38:42.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.316532 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 20:38:42.316640 systemd[1]: Finished modprobe@loop.service. Oct 2 20:38:42.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.320714 systemd[1]: Finished systemd-modules-load.service. 
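The modprobe@ units finished above are instances of systemd's modprobe@.service template, which loads the kernel module named by the instance (configfs, dm_mod, drm, efi_pstore, fuse, loop). The "fuse: init" and "loop: module loaded" lines interleaved earlier show the modules actually arriving; a quick cross-check on the running system:

  systemctl cat modprobe@loop.service   # shows the template's modprobe invocation
  lsmod | grep -E '^(fuse|loop) '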
Oct 2 20:38:42.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.325308 systemd[1]: Finished systemd-network-generator.service. Oct 2 20:38:42.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.329993 systemd[1]: Finished systemd-remount-fs.service. Oct 2 20:38:42.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.334564 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:38:42.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.339357 systemd[1]: Reached target network-pre.target. Oct 2 20:38:42.344875 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 20:38:42.349771 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 20:38:42.353372 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 20:38:42.362802 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 20:38:42.367613 systemd[1]: Starting systemd-journal-flush.service... Oct 2 20:38:42.371533 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 20:38:42.372442 systemd[1]: Starting systemd-random-seed.service... Oct 2 20:38:42.376315 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 20:38:42.377514 systemd[1]: Starting systemd-sysctl.service... Oct 2 20:38:42.381927 systemd[1]: Starting systemd-sysusers.service... Oct 2 20:38:42.388340 systemd[1]: Starting systemd-udev-settle.service... Oct 2 20:38:42.397035 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 20:38:42.401517 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 20:38:42.411968 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 20:38:42.412298 systemd-journald[1207]: Time spent on flushing to /var/log/journal/14259ec1574e4ac2aa308f94d7dd82e7 is 14.236ms for 1087 entries. Oct 2 20:38:42.412298 systemd-journald[1207]: System Journal (/var/log/journal/14259ec1574e4ac2aa308f94d7dd82e7) is 8.0M, max 2.6G, 2.6G free. Oct 2 20:38:42.465807 systemd-journald[1207]: Received client request to flush runtime journal. Oct 2 20:38:42.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.433953 systemd[1]: Finished systemd-random-seed.service. 
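The flush statistics above record the runtime journal being copied into persistent storage under /var/log/journal/14259ec1574e4ac2aa308f94d7dd82e7, a directory named after the machine ID that was initialized earlier this boot. Two simple ways to inspect the result afterwards:

  journalctl --disk-usage
  ls /var/log/journal/14259ec1574e4ac2aa308f94d7dd82e7/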
Oct 2 20:38:42.438594 systemd[1]: Reached target first-boot-complete.target. Oct 2 20:38:42.449093 systemd[1]: Finished systemd-sysctl.service. Oct 2 20:38:42.466971 systemd[1]: Finished systemd-journal-flush.service. Oct 2 20:38:42.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.619673 systemd[1]: Finished systemd-sysusers.service. Oct 2 20:38:42.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:42.625091 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 20:38:42.740153 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 20:38:42.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:43.005218 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 20:38:43.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:43.010000 audit: BPF prog-id=24 op=LOAD Oct 2 20:38:43.010000 audit: BPF prog-id=25 op=LOAD Oct 2 20:38:43.010000 audit: BPF prog-id=7 op=UNLOAD Oct 2 20:38:43.010000 audit: BPF prog-id=8 op=UNLOAD Oct 2 20:38:43.011399 systemd[1]: Starting systemd-udevd.service... Oct 2 20:38:43.031649 systemd-udevd[1226]: Using default interface naming scheme 'v252'. Oct 2 20:38:43.087903 systemd[1]: Started systemd-udevd.service. Oct 2 20:38:43.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:43.097000 audit: BPF prog-id=26 op=LOAD Oct 2 20:38:43.099511 systemd[1]: Starting systemd-networkd.service... Oct 2 20:38:43.120000 audit: BPF prog-id=27 op=LOAD Oct 2 20:38:43.120000 audit: BPF prog-id=28 op=LOAD Oct 2 20:38:43.120000 audit: BPF prog-id=29 op=LOAD Oct 2 20:38:43.121389 systemd[1]: Starting systemd-userdbd.service... Oct 2 20:38:43.150796 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Oct 2 20:38:43.164111 systemd[1]: Started systemd-userdbd.service. Oct 2 20:38:43.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:38:43.210091 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 20:38:43.255107 kernel: hv_vmbus: registering driver hyperv_fb Oct 2 20:38:43.269098 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Oct 2 20:38:43.269196 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Oct 2 20:38:43.276902 kernel: Console: switching to colour dummy device 80x25 Oct 2 20:38:43.280080 kernel: Console: switching to colour frame buffer device 128x48 Oct 2 20:38:43.297199 kernel: hv_utils: Registering HyperV Utility Driver Oct 2 20:38:43.297262 kernel: hv_vmbus: registering driver hv_utils Oct 2 20:38:43.246000 audit[1233]: AVC avc: denied { confidentiality } for pid=1233 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 20:38:43.303109 kernel: hv_vmbus: registering driver hv_balloon Oct 2 20:38:43.309182 kernel: hv_utils: Heartbeat IC version 3.0 Oct 2 20:38:43.309466 kernel: hv_utils: Shutdown IC version 3.2 Oct 2 20:38:43.309485 kernel: hv_utils: TimeSync IC version 4.0 Oct 2 20:38:43.735401 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Oct 2 20:38:43.735455 kernel: hv_balloon: Memory hot add disabled on ARM64 Oct 2 20:38:43.246000 audit[1233]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaafc16da20 a1=aa2c a2=ffffbc2d24b0 a3=aaaafc0cd010 items=10 ppid=1226 pid=1233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:38:43.246000 audit: CWD cwd="/" Oct 2 20:38:43.246000 audit: PATH item=0 name=(null) inode=10710 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:38:43.246000 audit: PATH item=1 name=(null) inode=10711 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:38:43.246000 audit: PATH item=2 name=(null) inode=10710 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:38:43.246000 audit: PATH item=3 name=(null) inode=10712 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:38:43.246000 audit: PATH item=4 name=(null) inode=10710 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:38:43.246000 audit: PATH item=5 name=(null) inode=10713 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:38:43.246000 audit: PATH item=6 name=(null) inode=10710 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:38:43.246000 audit: PATH item=7 name=(null) inode=10714 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:38:43.246000 audit: PATH item=8 name=(null) inode=10710 dev=00:0a mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:38:43.246000 audit: PATH item=9 name=(null) inode=10715 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:38:43.246000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 20:38:43.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:43.752763 systemd-networkd[1247]: lo: Link UP Oct 2 20:38:43.752772 systemd-networkd[1247]: lo: Gained carrier Oct 2 20:38:43.753177 systemd-networkd[1247]: Enumeration completed Oct 2 20:38:43.753268 systemd[1]: Started systemd-networkd.service. Oct 2 20:38:43.758773 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 20:38:43.769162 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 20:38:43.811030 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1244) Oct 2 20:38:43.822004 kernel: mlx5_core 51e8:00:02.0 enP20968s1: Link up Oct 2 20:38:43.850836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 20:38:43.864024 kernel: hv_netvsc 0022487c-1e7d-0022-487c-1e7d0022487c eth0: Data path switched to VF: enP20968s1 Oct 2 20:38:43.865007 systemd-networkd[1247]: enP20968s1: Link UP Oct 2 20:38:43.865104 systemd-networkd[1247]: eth0: Link UP Oct 2 20:38:43.865107 systemd-networkd[1247]: eth0: Gained carrier Oct 2 20:38:43.870245 systemd-networkd[1247]: enP20968s1: Gained carrier Oct 2 20:38:43.878105 systemd-networkd[1247]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16 Oct 2 20:38:43.902452 systemd[1]: Finished systemd-udev-settle.service. Oct 2 20:38:43.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:43.909230 systemd[1]: Starting lvm2-activation-early.service... Oct 2 20:38:43.910667 kernel: kauditd_printk_skb: 83 callbacks suppressed Oct 2 20:38:43.910690 kernel: audit: type=1130 audit(1696279123.906:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:44.017472 lvm[1304]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 20:38:44.044975 systemd[1]: Finished lvm2-activation-early.service. Oct 2 20:38:44.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:44.049661 systemd[1]: Reached target cryptsetup.target. Oct 2 20:38:44.066007 kernel: audit: type=1130 audit(1696279124.049:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:44.070697 systemd[1]: Starting lvm2-activation.service... Oct 2 20:38:44.076999 lvm[1305]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Oct 2 20:38:44.101900 systemd[1]: Finished lvm2-activation.service. Oct 2 20:38:44.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:44.112214 systemd[1]: Reached target local-fs-pre.target. Oct 2 20:38:44.124885 kernel: audit: type=1130 audit(1696279124.106:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:44.125241 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 20:38:44.125364 systemd[1]: Reached target local-fs.target. Oct 2 20:38:44.129501 systemd[1]: Reached target machines.target. Oct 2 20:38:44.134710 systemd[1]: Starting ldconfig.service... Oct 2 20:38:44.138370 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 20:38:44.138528 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:38:44.139581 systemd[1]: Starting systemd-boot-update.service... Oct 2 20:38:44.145169 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 20:38:44.152409 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 20:38:44.157436 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:38:44.157489 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:38:44.158512 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 20:38:44.163259 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1307 (bootctl) Oct 2 20:38:44.164663 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 20:38:44.193108 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 20:38:44.337576 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 20:38:44.937514 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 20:38:44.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:44.988614 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 20:38:45.010009 kernel: audit: type=1130 audit(1696279124.993:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:45.905211 systemd-networkd[1247]: eth0: Gained IPv6LL Oct 2 20:38:45.912337 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 20:38:45.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:38:45.934016 kernel: audit: type=1130 audit(1696279125.916:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:46.150268 systemd-fsck[1315]: fsck.fat 4.2 (2021-01-31) Oct 2 20:38:46.150268 systemd-fsck[1315]: /dev/sda1: 236 files, 113463/258078 clusters Oct 2 20:38:46.153379 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 20:38:46.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:46.160603 systemd[1]: Mounting boot.mount... Oct 2 20:38:46.182018 kernel: audit: type=1130 audit(1696279126.158:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:46.379155 systemd[1]: Mounted boot.mount. Oct 2 20:38:46.389174 systemd[1]: Finished systemd-boot-update.service. Oct 2 20:38:46.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:46.411026 kernel: audit: type=1130 audit(1696279126.392:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:48.449384 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 20:38:48.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:48.455714 systemd[1]: Starting audit-rules.service... Oct 2 20:38:48.474288 kernel: audit: type=1130 audit(1696279128.453:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:48.475811 systemd[1]: Starting clean-ca-certificates.service... Oct 2 20:38:48.481347 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 20:38:48.485000 audit: BPF prog-id=30 op=LOAD Oct 2 20:38:48.487849 systemd[1]: Starting systemd-resolved.service... Oct 2 20:38:48.495477 kernel: audit: type=1334 audit(1696279128.485:167): prog-id=30 op=LOAD Oct 2 20:38:48.495000 audit: BPF prog-id=31 op=LOAD Oct 2 20:38:48.502402 systemd[1]: Starting systemd-timesyncd.service... Oct 2 20:38:48.506458 kernel: audit: type=1334 audit(1696279128.495:168): prog-id=31 op=LOAD Oct 2 20:38:48.508540 systemd[1]: Starting systemd-update-utmp.service... Oct 2 20:38:48.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:48.743203 systemd[1]: Finished clean-ca-certificates.service. 
Oct 2 20:38:48.747882 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 20:38:48.934000 audit[1326]: SYSTEM_BOOT pid=1326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 20:38:48.939531 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 2 20:38:48.939584 kernel: audit: type=1127 audit(1696279128.934:170): pid=1326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 20:38:48.941956 systemd[1]: Finished systemd-update-utmp.service. Oct 2 20:38:48.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:48.978002 kernel: audit: type=1130 audit(1696279128.961:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:49.039443 systemd[1]: Started systemd-timesyncd.service. Oct 2 20:38:49.045574 systemd[1]: Reached target time-set.target. Oct 2 20:38:49.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:49.068004 kernel: audit: type=1130 audit(1696279129.044:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:49.343920 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 20:38:49.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:49.367022 kernel: audit: type=1130 audit(1696279129.348:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:49.388128 systemd-resolved[1324]: Positive Trust Anchors: Oct 2 20:38:49.388142 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 20:38:49.388171 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 20:38:49.878440 systemd-resolved[1324]: Using system hostname 'ci-3510.3.0-a-b6df30be81'. Oct 2 20:38:49.880349 systemd[1]: Started systemd-resolved.service. 
Oct 2 20:38:49.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:49.884642 systemd[1]: Reached target network.target. Oct 2 20:38:49.904608 kernel: audit: type=1130 audit(1696279129.884:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:38:49.904937 systemd[1]: Reached target network-online.target. Oct 2 20:38:49.909626 systemd[1]: Reached target nss-lookup.target. Oct 2 20:38:50.078635 systemd-timesyncd[1325]: Contacted time server 216.229.4.69:123 (0.flatcar.pool.ntp.org). Oct 2 20:38:50.078696 systemd-timesyncd[1325]: Initial clock synchronization to Mon 2023-10-02 20:38:50.073312 UTC. Oct 2 20:38:53.927000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 20:38:53.927000 audit[1342]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffde831880 a2=420 a3=0 items=0 ppid=1321 pid=1342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:38:53.962650 kernel: audit: type=1305 audit(1696279133.927:175): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 20:38:53.962735 kernel: audit: type=1300 audit(1696279133.927:175): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffde831880 a2=420 a3=0 items=0 ppid=1321 pid=1342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:38:53.962767 kernel: audit: type=1327 audit(1696279133.927:175): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 20:38:53.927000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 20:38:53.979089 augenrules[1342]: No rules Oct 2 20:38:53.980278 systemd[1]: Finished audit-rules.service. Oct 2 20:38:57.828400 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 20:38:57.835470 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 20:39:01.518187 ldconfig[1306]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 20:39:01.528324 systemd[1]: Finished ldconfig.service. Oct 2 20:39:01.534192 systemd[1]: Starting systemd-update-done.service... Oct 2 20:39:01.548094 systemd[1]: Finished systemd-update-done.service. Oct 2 20:39:01.552675 systemd[1]: Reached target sysinit.target. Oct 2 20:39:01.556557 systemd[1]: Started motdgen.path. Oct 2 20:39:01.560141 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 20:39:01.566101 systemd[1]: Started logrotate.timer. Oct 2 20:39:01.569871 systemd[1]: Started mdadm.timer. Oct 2 20:39:01.573200 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 20:39:01.577616 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 20:39:01.577649 systemd[1]: Reached target paths.target. Oct 2 20:39:01.581918 systemd[1]: Reached target timers.target. 
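The PROCTITLE value in the audit record above is the command line of the auditctl invocation, hex-encoded with NUL bytes between arguments. Decoding it (the hex string is copied verbatim from the record; the decoding snippet itself is a sketch):

    # PROCTITLE field from the audit record above: hex-encoded argv, NUL-separated.
    hex_proctitle = ("2F7362696E2F617564697463746C002D52002F657463"
                     "2F61756469742F61756469742E72756C6573")

    argv = [part.decode() for part in bytes.fromhex(hex_proctitle).split(b"\x00")]
    print(argv)   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']

which matches the augenrules/audit-rules activity logged immediately after it.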
Oct 2 20:39:01.586469 systemd[1]: Listening on dbus.socket. Oct 2 20:39:01.591181 systemd[1]: Starting docker.socket... Oct 2 20:39:01.597533 systemd[1]: Listening on sshd.socket. Oct 2 20:39:01.601234 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:39:01.601664 systemd[1]: Listening on docker.socket. Oct 2 20:39:01.605540 systemd[1]: Reached target sockets.target. Oct 2 20:39:01.609395 systemd[1]: Reached target basic.target. Oct 2 20:39:01.613349 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:39:01.613378 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:39:01.614651 systemd[1]: Starting containerd.service... Oct 2 20:39:01.619554 systemd[1]: Starting dbus.service... Oct 2 20:39:01.623762 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 20:39:01.629273 systemd[1]: Starting extend-filesystems.service... Oct 2 20:39:01.636163 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 20:39:01.638140 systemd[1]: Starting motdgen.service... Oct 2 20:39:01.643114 systemd[1]: Started nvidia.service. Oct 2 20:39:01.647892 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 20:39:01.655073 jq[1352]: false Oct 2 20:39:01.655314 systemd[1]: Starting prepare-critools.service... Oct 2 20:39:01.661861 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 20:39:01.666622 systemd[1]: Starting sshd-keygen.service... Oct 2 20:39:01.673367 systemd[1]: Starting systemd-logind.service... Oct 2 20:39:01.677076 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:39:01.677130 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 20:39:01.677502 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 20:39:01.679068 systemd[1]: Starting update-engine.service... Oct 2 20:39:01.686064 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 20:39:01.693071 jq[1370]: true Oct 2 20:39:01.694794 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 20:39:01.694961 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 20:39:01.697081 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 20:39:01.697234 systemd[1]: Finished ssh-key-proc-cmdline.service. 
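Several of the skipped units above carry conditions such as ConditionPathExists=/usr/.noupdate or ConditionPathIsReadWrite=!/, where a leading '!' negates the test. A rough Python sketch of that negation rule, using paths quoted in the log; it only mimics the path-existence case and is not systemd's actual condition engine:

    import os

    def condition_path_exists(expr):
        """Approximate systemd's ConditionPathExists=: a leading '!' negates the result."""
        negate = expr.startswith("!")
        exists = os.path.exists(expr.lstrip("!"))
        return not exists if negate else exists

    # Conditions quoted in the log above; results depend on the host this runs on.
    for expr in ("/usr/.noupdate", "!/", "/usr/share/oem/bin/flatcar-setup-environment"):
        print(expr, condition_path_exists(expr))

ConditionPathIsReadWrite tests writability rather than existence; only the '!' prefix behaviour is illustrated here.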
Oct 2 20:39:01.731137 jq[1378]: true Oct 2 20:39:01.744685 extend-filesystems[1353]: Found sda Oct 2 20:39:01.744685 extend-filesystems[1353]: Found sda1 Oct 2 20:39:01.744685 extend-filesystems[1353]: Found sda2 Oct 2 20:39:01.744685 extend-filesystems[1353]: Found sda3 Oct 2 20:39:01.744685 extend-filesystems[1353]: Found usr Oct 2 20:39:01.744685 extend-filesystems[1353]: Found sda4 Oct 2 20:39:01.744685 extend-filesystems[1353]: Found sda6 Oct 2 20:39:01.744685 extend-filesystems[1353]: Found sda7 Oct 2 20:39:01.744685 extend-filesystems[1353]: Found sda9 Oct 2 20:39:01.744685 extend-filesystems[1353]: Checking size of /dev/sda9 Oct 2 20:39:01.740821 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 20:39:01.865399 env[1383]: time="2023-10-02T20:39:01.853401694Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 20:39:01.865581 tar[1376]: crictl Oct 2 20:39:01.793728 dbus-daemon[1351]: [system] SELinux support is enabled Oct 2 20:39:01.865888 extend-filesystems[1353]: Old size kept for /dev/sda9 Oct 2 20:39:01.865888 extend-filesystems[1353]: Found sr0 Oct 2 20:39:01.929192 tar[1375]: ./ Oct 2 20:39:01.929192 tar[1375]: ./macvlan Oct 2 20:39:01.740971 systemd[1]: Finished motdgen.service. Oct 2 20:39:01.929488 env[1383]: time="2023-10-02T20:39:01.910353768Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 20:39:01.929488 env[1383]: time="2023-10-02T20:39:01.912625381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:39:01.929488 env[1383]: time="2023-10-02T20:39:01.914551734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:39:01.929488 env[1383]: time="2023-10-02T20:39:01.914580169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:39:01.929488 env[1383]: time="2023-10-02T20:39:01.914783494Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:39:01.929488 env[1383]: time="2023-10-02T20:39:01.914802611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 20:39:01.929488 env[1383]: time="2023-10-02T20:39:01.914816049Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 20:39:01.929488 env[1383]: time="2023-10-02T20:39:01.914825967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 20:39:01.929488 env[1383]: time="2023-10-02T20:39:01.914896115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:39:01.929488 env[1383]: time="2023-10-02T20:39:01.915230098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:39:01.929737 bash[1417]: Updated "/home/core/.ssh/authorized_keys" Oct 2 20:39:01.793865 systemd[1]: Started dbus.service. 
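containerd's own messages above use a logfmt-style layout (time=... level=... msg=...). A small sketch splitting one such line into fields; the sample is the "starting containerd" line from the log, and the regex is illustrative rather than a complete logfmt parser:

    import re

    line = ('time="2023-10-02T20:39:01.853401694Z" level=info msg="starting containerd" '
            'revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16')

    # Quoted values may contain spaces; unquoted values may not.
    fields = {k: v.strip('"') for k, v in re.findall(r'(\w+)=("[^"]*"|\S+)', line)}
    print(fields["level"], fields["version"], fields["msg"])   # info 1.6.16 starting containerd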
Oct 2 20:39:01.929898 env[1383]: time="2023-10-02T20:39:01.915345679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:39:01.929898 env[1383]: time="2023-10-02T20:39:01.915360196Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 20:39:01.929898 env[1383]: time="2023-10-02T20:39:01.915412667Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 20:39:01.929898 env[1383]: time="2023-10-02T20:39:01.915424505Z" level=info msg="metadata content store policy set" policy=shared Oct 2 20:39:01.814302 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 20:39:01.814458 systemd[1]: Finished extend-filesystems.service. Oct 2 20:39:01.821367 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 20:39:01.821388 systemd[1]: Reached target system-config.target. Oct 2 20:39:01.828269 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 20:39:01.828285 systemd[1]: Reached target user-config.target. Oct 2 20:39:01.865154 systemd-logind[1366]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Oct 2 20:39:01.874104 systemd-logind[1366]: New seat seat0. Oct 2 20:39:01.885799 systemd[1]: Started systemd-logind.service. Oct 2 20:39:01.928873 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 20:39:01.938618 update_engine[1368]: I1002 20:39:01.925858 1368 main.cc:92] Flatcar Update Engine starting Oct 2 20:39:01.940479 tar[1375]: ./static Oct 2 20:39:01.942594 env[1383]: time="2023-10-02T20:39:01.942550092Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 20:39:01.942659 env[1383]: time="2023-10-02T20:39:01.942597004Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 20:39:01.942659 env[1383]: time="2023-10-02T20:39:01.942618480Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 20:39:01.942659 env[1383]: time="2023-10-02T20:39:01.942649795Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 20:39:01.942720 env[1383]: time="2023-10-02T20:39:01.942664312Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 20:39:01.942720 env[1383]: time="2023-10-02T20:39:01.942678950Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 20:39:01.942720 env[1383]: time="2023-10-02T20:39:01.942692188Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 20:39:01.943522 env[1383]: time="2023-10-02T20:39:01.943220258Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 20:39:01.943522 env[1383]: time="2023-10-02T20:39:01.943247053Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Oct 2 20:39:01.943522 env[1383]: time="2023-10-02T20:39:01.943260251Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 20:39:01.943522 env[1383]: time="2023-10-02T20:39:01.943272209Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 20:39:01.943522 env[1383]: time="2023-10-02T20:39:01.943286087Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 20:39:01.943642 env[1383]: time="2023-10-02T20:39:01.943547122Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 20:39:01.943666 env[1383]: time="2023-10-02T20:39:01.943654104Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944092709Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944138182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944154739Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944234085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944248323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944260361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944271799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944447449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944463326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944475724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944487642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944501640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944655054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944690048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945049 env[1383]: time="2023-10-02T20:39:01.944704965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Oct 2 20:39:01.945395 env[1383]: time="2023-10-02T20:39:01.944716923Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 20:39:01.945395 env[1383]: time="2023-10-02T20:39:01.944733560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 20:39:01.945395 env[1383]: time="2023-10-02T20:39:01.944746158Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 20:39:01.945395 env[1383]: time="2023-10-02T20:39:01.944772274Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 20:39:01.945395 env[1383]: time="2023-10-02T20:39:01.944806108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 20:39:01.945492 env[1383]: time="2023-10-02T20:39:01.945044347Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 20:39:01.945492 env[1383]: time="2023-10-02T20:39:01.945112656Z" level=info msg="Connect containerd service" Oct 2 20:39:01.945492 env[1383]: time="2023-10-02T20:39:01.945153289Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 20:39:01.951181 env[1383]: time="2023-10-02T20:39:01.945889444Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network 
for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 20:39:01.951181 env[1383]: time="2023-10-02T20:39:01.946075132Z" level=info msg="Start subscribing containerd event" Oct 2 20:39:01.951181 env[1383]: time="2023-10-02T20:39:01.946115925Z" level=info msg="Start recovering state" Oct 2 20:39:01.951181 env[1383]: time="2023-10-02T20:39:01.946173915Z" level=info msg="Start event monitor" Oct 2 20:39:01.951181 env[1383]: time="2023-10-02T20:39:01.946210149Z" level=info msg="Start snapshots syncer" Oct 2 20:39:01.951181 env[1383]: time="2023-10-02T20:39:01.946219668Z" level=info msg="Start cni network conf syncer for default" Oct 2 20:39:01.951181 env[1383]: time="2023-10-02T20:39:01.946226906Z" level=info msg="Start streaming server" Oct 2 20:39:01.951181 env[1383]: time="2023-10-02T20:39:01.946597123Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 20:39:01.951181 env[1383]: time="2023-10-02T20:39:01.946637517Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 20:39:01.946845 systemd[1]: Started containerd.service. Oct 2 20:39:01.951553 systemd[1]: Started update-engine.service. Oct 2 20:39:01.951778 update_engine[1368]: I1002 20:39:01.951613 1368 update_check_scheduler.cc:74] Next update check in 11m37s Oct 2 20:39:01.959968 systemd[1]: Started locksmithd.service. Oct 2 20:39:01.977444 env[1383]: time="2023-10-02T20:39:01.977407923Z" level=info msg="containerd successfully booted in 0.134643s" Oct 2 20:39:02.004492 systemd[1]: nvidia.service: Deactivated successfully. Oct 2 20:39:02.014215 tar[1375]: ./vlan Oct 2 20:39:02.121561 tar[1375]: ./portmap Oct 2 20:39:02.175971 tar[1375]: ./host-local Oct 2 20:39:02.219368 tar[1375]: ./vrf Oct 2 20:39:02.274951 tar[1375]: ./bridge Oct 2 20:39:02.285831 systemd[1]: Finished prepare-critools.service. Oct 2 20:39:02.316230 tar[1375]: ./tuning Oct 2 20:39:02.343561 tar[1375]: ./firewall Oct 2 20:39:02.377799 tar[1375]: ./host-device Oct 2 20:39:02.408266 tar[1375]: ./sbr Oct 2 20:39:02.435643 tar[1375]: ./loopback Oct 2 20:39:02.462230 tar[1375]: ./dhcp Oct 2 20:39:02.536696 tar[1375]: ./ptp Oct 2 20:39:02.568995 tar[1375]: ./ipvlan Oct 2 20:39:02.600570 tar[1375]: ./bandwidth Oct 2 20:39:02.651288 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 20:39:02.892791 locksmithd[1442]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 20:39:03.455072 sshd_keygen[1382]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 20:39:03.480117 systemd[1]: Finished sshd-keygen.service. Oct 2 20:39:03.485416 systemd[1]: Starting issuegen.service... Oct 2 20:39:03.489698 systemd[1]: Started waagent.service. Oct 2 20:39:03.496377 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 20:39:03.496524 systemd[1]: Finished issuegen.service. Oct 2 20:39:03.501215 systemd[1]: Starting systemd-user-sessions.service... Oct 2 20:39:03.520054 systemd[1]: Finished systemd-user-sessions.service. Oct 2 20:39:03.525979 systemd[1]: Started getty@tty1.service. Oct 2 20:39:03.530817 systemd[1]: Started serial-getty@ttyAMA0.service. Oct 2 20:39:03.536572 systemd[1]: Reached target getty.target. Oct 2 20:39:03.540433 systemd[1]: Reached target multi-user.target. Oct 2 20:39:03.545948 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 20:39:03.559244 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Oct 2 20:39:03.559398 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 20:39:03.565804 systemd[1]: Startup finished in 709ms (kernel) + 9.643s (initrd) + 24.874s (userspace) = 35.227s. Oct 2 20:39:03.724948 login[1486]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Oct 2 20:39:03.725490 login[1485]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 20:39:03.742237 systemd[1]: Created slice user-500.slice. Oct 2 20:39:03.743332 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 20:39:03.746036 systemd-logind[1366]: New session 2 of user core. Oct 2 20:39:03.769358 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 20:39:03.770915 systemd[1]: Starting user@500.service... Oct 2 20:39:03.795321 (systemd)[1489]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:39:03.934520 systemd[1489]: Queued start job for default target default.target. Oct 2 20:39:03.935687 systemd[1489]: Reached target paths.target. Oct 2 20:39:03.935812 systemd[1489]: Reached target sockets.target. Oct 2 20:39:03.935887 systemd[1489]: Reached target timers.target. Oct 2 20:39:03.935959 systemd[1489]: Reached target basic.target. Oct 2 20:39:03.936098 systemd[1489]: Reached target default.target. Oct 2 20:39:03.936174 systemd[1]: Started user@500.service. Oct 2 20:39:03.936865 systemd[1489]: Startup finished in 133ms. Oct 2 20:39:03.936998 systemd[1]: Started session-2.scope. Oct 2 20:39:04.728019 login[1486]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 2 20:39:04.732035 systemd-logind[1366]: New session 1 of user core. Oct 2 20:39:04.732722 systemd[1]: Started session-1.scope. Oct 2 20:39:05.677660 waagent[1480]: 2023-10-02T20:39:05.677551Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Oct 2 20:39:05.683530 waagent[1480]: 2023-10-02T20:39:05.683464Z INFO Daemon Daemon OS: flatcar 3510.3.0 Oct 2 20:39:05.688129 waagent[1480]: 2023-10-02T20:39:05.688070Z INFO Daemon Daemon Python: 3.9.16 Oct 2 20:39:05.692785 waagent[1480]: 2023-10-02T20:39:05.692693Z INFO Daemon Daemon Run daemon Oct 2 20:39:05.696768 waagent[1480]: 2023-10-02T20:39:05.696709Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.0' Oct 2 20:39:05.715871 waagent[1480]: 2023-10-02T20:39:05.715756Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
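The "Startup finished" line above breaks the 35.227s boot into kernel, initrd and userspace stages. Summing the rounded per-stage figures reproduces the total to within a millisecond (values copied from the log; the explanation of the 1ms gap is an assumption about rounding, not something the log states):

    # Stage timings from the 'Startup finished' line above, in seconds.
    kernel, initrd, userspace = 0.709, 9.643, 24.874
    print(f"{kernel + initrd + userspace:.3f}s")   # 35.226s; the logged total of 35.227s is
                                                   # presumably computed from unrounded timestamps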
Oct 2 20:39:05.730007 waagent[1480]: 2023-10-02T20:39:05.729877Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 2 20:39:05.739388 waagent[1480]: 2023-10-02T20:39:05.739323Z INFO Daemon Daemon cloud-init is enabled: False Oct 2 20:39:05.744082 waagent[1480]: 2023-10-02T20:39:05.744019Z INFO Daemon Daemon Using waagent for provisioning Oct 2 20:39:05.749391 waagent[1480]: 2023-10-02T20:39:05.749330Z INFO Daemon Daemon Activate resource disk Oct 2 20:39:05.753745 waagent[1480]: 2023-10-02T20:39:05.753681Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Oct 2 20:39:05.767239 waagent[1480]: 2023-10-02T20:39:05.767175Z INFO Daemon Daemon Found device: None Oct 2 20:39:05.771482 waagent[1480]: 2023-10-02T20:39:05.771419Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Oct 2 20:39:05.779177 waagent[1480]: 2023-10-02T20:39:05.779117Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Oct 2 20:39:05.790126 waagent[1480]: 2023-10-02T20:39:05.790063Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 2 20:39:05.795420 waagent[1480]: 2023-10-02T20:39:05.795360Z INFO Daemon Daemon Running default provisioning handler Oct 2 20:39:05.810812 waagent[1480]: 2023-10-02T20:39:05.810691Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Oct 2 20:39:05.824729 waagent[1480]: 2023-10-02T20:39:05.824612Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Oct 2 20:39:05.833499 waagent[1480]: 2023-10-02T20:39:05.833436Z INFO Daemon Daemon cloud-init is enabled: False Oct 2 20:39:05.838231 waagent[1480]: 2023-10-02T20:39:05.838172Z INFO Daemon Daemon Copying ovf-env.xml Oct 2 20:39:05.891012 waagent[1480]: 2023-10-02T20:39:05.889681Z INFO Daemon Daemon Successfully mounted dvd Oct 2 20:39:05.936056 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Oct 2 20:39:05.951652 waagent[1480]: 2023-10-02T20:39:05.951551Z INFO Daemon Daemon Detect protocol endpoint Oct 2 20:39:05.956377 waagent[1480]: 2023-10-02T20:39:05.956315Z INFO Daemon Daemon Clean protocol and wireserver endpoint Oct 2 20:39:05.962013 waagent[1480]: 2023-10-02T20:39:05.961939Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Oct 2 20:39:05.968176 waagent[1480]: 2023-10-02T20:39:05.968117Z INFO Daemon Daemon Test for route to 168.63.129.16 Oct 2 20:39:05.973388 waagent[1480]: 2023-10-02T20:39:05.973332Z INFO Daemon Daemon Route to 168.63.129.16 exists Oct 2 20:39:05.978279 waagent[1480]: 2023-10-02T20:39:05.978221Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Oct 2 20:39:06.029203 waagent[1480]: 2023-10-02T20:39:06.029147Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Oct 2 20:39:06.035684 waagent[1480]: 2023-10-02T20:39:06.035642Z INFO Daemon Daemon Wire protocol version:2012-11-30 Oct 2 20:39:06.040748 waagent[1480]: 2023-10-02T20:39:06.040690Z INFO Daemon Daemon Server preferred version:2015-04-05 Oct 2 20:39:06.575395 waagent[1480]: 2023-10-02T20:39:06.575259Z INFO Daemon Daemon Initializing goal state during protocol detection Oct 2 20:39:06.589922 waagent[1480]: 2023-10-02T20:39:06.589855Z INFO Daemon Daemon Forcing an update of the goal state.. Oct 2 20:39:06.595509 waagent[1480]: 2023-10-02T20:39:06.595449Z INFO Daemon Daemon Fetching goal state [incarnation 1] Oct 2 20:39:06.680041 waagent[1480]: 2023-10-02T20:39:06.679902Z INFO Daemon Daemon Found private key matching thumbprint 1D00DB21CD0DB8A91813EB84F5DCB88E8F3303C8 Oct 2 20:39:06.688102 waagent[1480]: 2023-10-02T20:39:06.688038Z INFO Daemon Daemon Certificate with thumbprint 311D639E44AAF270E114D4B8EE01A5D679B44465 has no matching private key. Oct 2 20:39:06.696922 waagent[1480]: 2023-10-02T20:39:06.696862Z INFO Daemon Daemon Fetch goal state completed Oct 2 20:39:06.725601 waagent[1480]: 2023-10-02T20:39:06.725533Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 3674e0ee-161a-49e3-9304-80ba56056390 New eTag: 9401384393962446475] Oct 2 20:39:06.736975 waagent[1480]: 2023-10-02T20:39:06.736895Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Oct 2 20:39:06.751883 waagent[1480]: 2023-10-02T20:39:06.751819Z INFO Daemon Daemon Starting provisioning Oct 2 20:39:06.756705 waagent[1480]: 2023-10-02T20:39:06.756640Z INFO Daemon Daemon Handle ovf-env.xml. Oct 2 20:39:06.761290 waagent[1480]: 2023-10-02T20:39:06.761232Z INFO Daemon Daemon Set hostname [ci-3510.3.0-a-b6df30be81] Oct 2 20:39:07.062501 waagent[1480]: 2023-10-02T20:39:07.047659Z INFO Daemon Daemon Publish hostname [ci-3510.3.0-a-b6df30be81] Oct 2 20:39:07.068893 waagent[1480]: 2023-10-02T20:39:07.068803Z INFO Daemon Daemon Examine /proc/net/route for primary interface Oct 2 20:39:07.074937 waagent[1480]: 2023-10-02T20:39:07.074873Z INFO Daemon Daemon Primary interface is [eth0] Oct 2 20:39:07.094573 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Oct 2 20:39:07.094744 systemd[1]: Stopped systemd-networkd-wait-online.service. Oct 2 20:39:07.094797 systemd[1]: Stopping systemd-networkd-wait-online.service... Oct 2 20:39:07.095035 systemd[1]: Stopping systemd-networkd.service... Oct 2 20:39:07.099028 systemd-networkd[1247]: eth0: DHCPv6 lease lost Oct 2 20:39:07.100740 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 20:39:07.100909 systemd[1]: Stopped systemd-networkd.service. Oct 2 20:39:07.102794 systemd[1]: Starting systemd-networkd.service... 
Oct 2 20:39:07.134254 systemd-networkd[1532]: enP20968s1: Link UP Oct 2 20:39:07.134267 systemd-networkd[1532]: enP20968s1: Gained carrier Oct 2 20:39:07.135146 systemd-networkd[1532]: eth0: Link UP Oct 2 20:39:07.135157 systemd-networkd[1532]: eth0: Gained carrier Oct 2 20:39:07.135463 systemd-networkd[1532]: lo: Link UP Oct 2 20:39:07.135473 systemd-networkd[1532]: lo: Gained carrier Oct 2 20:39:07.135693 systemd-networkd[1532]: eth0: Gained IPv6LL Oct 2 20:39:07.136749 systemd-networkd[1532]: Enumeration completed Oct 2 20:39:07.136901 systemd[1]: Started systemd-networkd.service. Oct 2 20:39:07.138261 systemd-networkd[1532]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 20:39:07.138499 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 20:39:07.141756 waagent[1480]: 2023-10-02T20:39:07.141368Z INFO Daemon Daemon Create user account if not exists Oct 2 20:39:07.149207 waagent[1480]: 2023-10-02T20:39:07.149124Z INFO Daemon Daemon User core already exists, skip useradd Oct 2 20:39:07.155399 waagent[1480]: 2023-10-02T20:39:07.155328Z INFO Daemon Daemon Configure sudoer Oct 2 20:39:07.160087 waagent[1480]: 2023-10-02T20:39:07.160017Z INFO Daemon Daemon Configure sshd Oct 2 20:39:07.163960 waagent[1480]: 2023-10-02T20:39:07.163892Z INFO Daemon Daemon Deploy ssh public key. Oct 2 20:39:07.165057 systemd-networkd[1532]: eth0: DHCPv4 address 10.200.20.44/24, gateway 10.200.20.1 acquired from 168.63.129.16 Oct 2 20:39:07.169646 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 20:39:08.384278 waagent[1480]: 2023-10-02T20:39:08.384202Z INFO Daemon Daemon Provisioning complete Oct 2 20:39:08.404030 waagent[1480]: 2023-10-02T20:39:08.403952Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Oct 2 20:39:08.410494 waagent[1480]: 2023-10-02T20:39:08.410426Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Oct 2 20:39:08.419819 waagent[1480]: 2023-10-02T20:39:08.419758Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Oct 2 20:39:08.717077 waagent[1541]: 2023-10-02T20:39:08.716925Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Oct 2 20:39:08.718120 waagent[1541]: 2023-10-02T20:39:08.718067Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 20:39:08.718360 waagent[1541]: 2023-10-02T20:39:08.718313Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 20:39:08.730489 waagent[1541]: 2023-10-02T20:39:08.730426Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Oct 2 20:39:08.730742 waagent[1541]: 2023-10-02T20:39:08.730693Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Oct 2 20:39:08.805899 waagent[1541]: 2023-10-02T20:39:08.805772Z INFO ExtHandler ExtHandler Found private key matching thumbprint 1D00DB21CD0DB8A91813EB84F5DCB88E8F3303C8 Oct 2 20:39:08.806292 waagent[1541]: 2023-10-02T20:39:08.806239Z INFO ExtHandler ExtHandler Certificate with thumbprint 311D639E44AAF270E114D4B8EE01A5D679B44465 has no matching private key. 
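After the networkd restart the interface again reports 10.200.20.44/24 with gateway 10.200.20.1. Deriving the network and broadcast address from that prefix with Python's ipaddress module; only the address and prefix are taken from the log:

    import ipaddress

    iface = ipaddress.ip_interface("10.200.20.44/24")
    print(iface.network)                      # 10.200.20.0/24
    print(iface.network.broadcast_address)    # 10.200.20.255, the 'brd' value in the ip output below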
Oct 2 20:39:08.806613 waagent[1541]: 2023-10-02T20:39:08.806564Z INFO ExtHandler ExtHandler Fetch goal state completed Oct 2 20:39:08.820288 waagent[1541]: 2023-10-02T20:39:08.820240Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 997b5460-4b86-441d-a616-fbf9789d8bfc New eTag: 9401384393962446475] Oct 2 20:39:08.820960 waagent[1541]: 2023-10-02T20:39:08.820903Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Oct 2 20:39:08.862300 waagent[1541]: 2023-10-02T20:39:08.862200Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.0; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Oct 2 20:39:08.872319 waagent[1541]: 2023-10-02T20:39:08.872258Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1541 Oct 2 20:39:08.876092 waagent[1541]: 2023-10-02T20:39:08.876034Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 2 20:39:08.877501 waagent[1541]: 2023-10-02T20:39:08.877444Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 2 20:39:08.913224 waagent[1541]: 2023-10-02T20:39:08.913174Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 2 20:39:08.913694 waagent[1541]: 2023-10-02T20:39:08.913641Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 2 20:39:08.923814 waagent[1541]: 2023-10-02T20:39:08.923761Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 2 20:39:08.924453 waagent[1541]: 2023-10-02T20:39:08.924398Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Oct 2 20:39:08.925704 waagent[1541]: 2023-10-02T20:39:08.925641Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Oct 2 20:39:08.927177 waagent[1541]: 2023-10-02T20:39:08.927108Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 2 20:39:08.927409 waagent[1541]: 2023-10-02T20:39:08.927343Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 20:39:08.927971 waagent[1541]: 2023-10-02T20:39:08.927897Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 20:39:08.928592 waagent[1541]: 2023-10-02T20:39:08.928525Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
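The goal-state fetch above reports a private key matching thumbprint 1D00DB21CD0DB8A91813EB84F5DCB88E8F3303C8 and a certificate 311D639E44AAF270E114D4B8EE01A5D679B44465 with no matching private key. Thumbprints of this form are typically the uppercase SHA-1 digest of the DER-encoded certificate; a sketch under that assumption, with placeholder input since the certificates themselves are not in the log:

    import hashlib

    def thumbprint(der_bytes: bytes) -> str:
        """Uppercase SHA-1 over a DER-encoded certificate, the usual thumbprint form."""
        return hashlib.sha1(der_bytes).hexdigest().upper()

    # Placeholder bytes; real input would be the DER certificate that produced
    # 1D00DB21CD0DB8A91813EB84F5DCB88E8F3303C8 above.
    print(thumbprint(b"not-a-real-certificate"))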
Oct 2 20:39:08.928901 waagent[1541]: 2023-10-02T20:39:08.928844Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 2 20:39:08.928901 waagent[1541]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 2 20:39:08.928901 waagent[1541]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Oct 2 20:39:08.928901 waagent[1541]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 2 20:39:08.928901 waagent[1541]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 2 20:39:08.928901 waagent[1541]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 20:39:08.928901 waagent[1541]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 20:39:08.930952 waagent[1541]: 2023-10-02T20:39:08.930801Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Oct 2 20:39:08.931407 waagent[1541]: 2023-10-02T20:39:08.931334Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 20:39:08.931970 waagent[1541]: 2023-10-02T20:39:08.931906Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 20:39:08.932560 waagent[1541]: 2023-10-02T20:39:08.932487Z INFO EnvHandler ExtHandler Configure routes Oct 2 20:39:08.932712 waagent[1541]: 2023-10-02T20:39:08.932666Z INFO EnvHandler ExtHandler Gateway:None Oct 2 20:39:08.932833 waagent[1541]: 2023-10-02T20:39:08.932790Z INFO EnvHandler ExtHandler Routes:None Oct 2 20:39:08.933405 waagent[1541]: 2023-10-02T20:39:08.933326Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 2 20:39:08.933685 waagent[1541]: 2023-10-02T20:39:08.933624Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Oct 2 20:39:08.934749 waagent[1541]: 2023-10-02T20:39:08.934671Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 2 20:39:08.934941 waagent[1541]: 2023-10-02T20:39:08.934882Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Oct 2 20:39:08.935541 waagent[1541]: 2023-10-02T20:39:08.935466Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 2 20:39:08.948340 waagent[1541]: 2023-10-02T20:39:08.948276Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Oct 2 20:39:08.949137 waagent[1541]: 2023-10-02T20:39:08.949081Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Oct 2 20:39:08.951905 waagent[1541]: 2023-10-02T20:39:08.951845Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Oct 2 20:39:08.961135 waagent[1541]: 2023-10-02T20:39:08.961058Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1532' Oct 2 20:39:08.978933 waagent[1541]: 2023-10-02T20:39:08.978775Z INFO MonitorHandler ExtHandler Network interfaces: Oct 2 20:39:08.978933 waagent[1541]: Executing ['ip', '-a', '-o', 'link']: Oct 2 20:39:08.978933 waagent[1541]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 2 20:39:08.978933 waagent[1541]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:1e:7d brd ff:ff:ff:ff:ff:ff Oct 2 20:39:08.978933 waagent[1541]: 3: enP20968s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:1e:7d brd ff:ff:ff:ff:ff:ff\ altname enP20968p0s2 Oct 2 20:39:08.978933 waagent[1541]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 2 20:39:08.978933 waagent[1541]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 2 20:39:08.978933 waagent[1541]: 2: eth0 inet 10.200.20.44/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 2 20:39:08.978933 waagent[1541]: Executing ['ip', '-6', '-a', '-o', 'address']: Oct 2 20:39:08.978933 waagent[1541]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Oct 2 20:39:08.978933 waagent[1541]: 2: eth0 inet6 fe80::222:48ff:fe7c:1e7d/64 scope link \ valid_lft forever preferred_lft forever Oct 2 20:39:09.001840 waagent[1541]: 2023-10-02T20:39:09.001783Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Oct 2 20:39:09.156455 waagent[1541]: 2023-10-02T20:39:09.156373Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Oct 2 20:39:09.423361 waagent[1480]: 2023-10-02T20:39:09.423183Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Oct 2 20:39:09.427081 waagent[1480]: 2023-10-02T20:39:09.426980Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Oct 2 20:39:10.535404 waagent[1584]: 2023-10-02T20:39:10.535310Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Oct 2 20:39:10.536409 waagent[1584]: 2023-10-02T20:39:10.536354Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.0 Oct 2 20:39:10.536641 waagent[1584]: 2023-10-02T20:39:10.536593Z INFO ExtHandler ExtHandler Python: 3.9.16 Oct 2 20:39:10.547026 waagent[1584]: 2023-10-02T20:39:10.546908Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.0; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Oct 2 20:39:10.547508 waagent[1584]: 2023-10-02T20:39:10.547455Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 20:39:10.547798 waagent[1584]: 2023-10-02T20:39:10.547748Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 20:39:10.560413 waagent[1584]: 2023-10-02T20:39:10.560344Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Oct 2 20:39:10.569013 waagent[1584]: 2023-10-02T20:39:10.568941Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Oct 2 20:39:10.570080 waagent[1584]: 2023-10-02T20:39:10.570022Z INFO ExtHandler Oct 2 20:39:10.570320 waagent[1584]: 2023-10-02T20:39:10.570272Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 336b687c-feb3-497f-a57a-a03ac0c336c7 eTag: 9401384393962446475 source: Fabric] Oct 2 20:39:10.571161 waagent[1584]: 2023-10-02T20:39:10.571104Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Oct 2 20:39:10.572511 waagent[1584]: 2023-10-02T20:39:10.572452Z INFO ExtHandler Oct 2 20:39:10.572740 waagent[1584]: 2023-10-02T20:39:10.572693Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Oct 2 20:39:10.578803 waagent[1584]: 2023-10-02T20:39:10.578758Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Oct 2 20:39:10.579375 waagent[1584]: 2023-10-02T20:39:10.579328Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Oct 2 20:39:10.597846 waagent[1584]: 2023-10-02T20:39:10.597792Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Oct 2 20:39:10.681381 waagent[1584]: 2023-10-02T20:39:10.681252Z INFO ExtHandler Downloaded certificate {'thumbprint': '311D639E44AAF270E114D4B8EE01A5D679B44465', 'hasPrivateKey': False} Oct 2 20:39:10.682617 waagent[1584]: 2023-10-02T20:39:10.682560Z INFO ExtHandler Downloaded certificate {'thumbprint': '1D00DB21CD0DB8A91813EB84F5DCB88E8F3303C8', 'hasPrivateKey': True} Oct 2 20:39:10.683757 waagent[1584]: 2023-10-02T20:39:10.683699Z INFO ExtHandler Fetch goal state completed Oct 2 20:39:10.707728 waagent[1584]: 2023-10-02T20:39:10.707663Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1584 Oct 2 20:39:10.711322 waagent[1584]: 2023-10-02T20:39:10.711261Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.0', '', 'Flatcar Container Linux by Kinvolk'] Oct 2 20:39:10.712884 waagent[1584]: 2023-10-02T20:39:10.712826Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Oct 2 20:39:10.719660 waagent[1584]: 2023-10-02T20:39:10.719613Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Oct 2 20:39:10.720175 waagent[1584]: 2023-10-02T20:39:10.720117Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Oct 2 20:39:10.731450 waagent[1584]: 2023-10-02T20:39:10.731398Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Oct 2 20:39:10.732033 waagent[1584]: 2023-10-02T20:39:10.731950Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Oct 2 20:39:10.757877 waagent[1584]: 2023-10-02T20:39:10.757783Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Oct 2 20:39:10.761876 waagent[1584]: 2023-10-02T20:39:10.761783Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Oct 2 20:39:10.765571 waagent[1584]: 2023-10-02T20:39:10.765507Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Oct 2 20:39:10.767246 waagent[1584]: 2023-10-02T20:39:10.767177Z INFO ExtHandler ExtHandler Starting env monitor service. Oct 2 20:39:10.767548 waagent[1584]: 2023-10-02T20:39:10.767469Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 20:39:10.768153 waagent[1584]: 2023-10-02T20:39:10.768089Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 20:39:10.768740 waagent[1584]: 2023-10-02T20:39:10.768671Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Oct 2 20:39:10.769074 waagent[1584]: 2023-10-02T20:39:10.769009Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Oct 2 20:39:10.769074 waagent[1584]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Oct 2 20:39:10.769074 waagent[1584]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Oct 2 20:39:10.769074 waagent[1584]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Oct 2 20:39:10.769074 waagent[1584]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Oct 2 20:39:10.769074 waagent[1584]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 20:39:10.769074 waagent[1584]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Oct 2 20:39:10.771444 waagent[1584]: 2023-10-02T20:39:10.771281Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Oct 2 20:39:10.772038 waagent[1584]: 2023-10-02T20:39:10.771938Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Oct 2 20:39:10.772763 waagent[1584]: 2023-10-02T20:39:10.772682Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Oct 2 20:39:10.773221 waagent[1584]: 2023-10-02T20:39:10.773152Z INFO EnvHandler ExtHandler Configure routes Oct 2 20:39:10.773371 waagent[1584]: 2023-10-02T20:39:10.773317Z INFO EnvHandler ExtHandler Gateway:None Oct 2 20:39:10.773491 waagent[1584]: 2023-10-02T20:39:10.773441Z INFO EnvHandler ExtHandler Routes:None Oct 2 20:39:10.774202 waagent[1584]: 2023-10-02T20:39:10.774129Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Oct 2 20:39:10.776691 waagent[1584]: 2023-10-02T20:39:10.776603Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Oct 2 20:39:10.777744 waagent[1584]: 2023-10-02T20:39:10.777649Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Oct 2 20:39:10.780038 waagent[1584]: 2023-10-02T20:39:10.779859Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Oct 2 20:39:10.780691 waagent[1584]: 2023-10-02T20:39:10.780614Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Oct 2 20:39:10.794903 waagent[1584]: 2023-10-02T20:39:10.794787Z INFO MonitorHandler ExtHandler Network interfaces: Oct 2 20:39:10.794903 waagent[1584]: Executing ['ip', '-a', '-o', 'link']: Oct 2 20:39:10.794903 waagent[1584]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Oct 2 20:39:10.794903 waagent[1584]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:1e:7d brd ff:ff:ff:ff:ff:ff Oct 2 20:39:10.794903 waagent[1584]: 3: enP20968s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:1e:7d brd ff:ff:ff:ff:ff:ff\ altname enP20968p0s2 Oct 2 20:39:10.794903 waagent[1584]: Executing ['ip', '-4', '-a', '-o', 'address']: Oct 2 20:39:10.794903 waagent[1584]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Oct 2 20:39:10.794903 waagent[1584]: 2: eth0 inet 10.200.20.44/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Oct 2 20:39:10.794903 waagent[1584]: Executing ['ip', '-6', '-a', '-o', 'address']: Oct 2 20:39:10.794903 waagent[1584]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Oct 2 20:39:10.794903 waagent[1584]: 2: eth0 inet6 fe80::222:48ff:fe7c:1e7d/64 scope link \ valid_lft forever preferred_lft forever Oct 2 20:39:10.797402 waagent[1584]: 2023-10-02T20:39:10.797330Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Oct 2 20:39:10.800130 waagent[1584]: 2023-10-02T20:39:10.800045Z INFO ExtHandler ExtHandler Downloading manifest Oct 2 20:39:10.848628 waagent[1584]: 2023-10-02T20:39:10.847982Z INFO ExtHandler ExtHandler Oct 2 20:39:10.852957 waagent[1584]: 2023-10-02T20:39:10.852878Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bfb9619b-fad5-414f-9870-09375d3a0a73 correlation f92db5d9-5f5c-43aa-8944-e230785bf681 created: 2023-10-02T20:38:06.830978Z] Oct 2 20:39:10.854445 waagent[1584]: 2023-10-02T20:39:10.854364Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Oct 2 20:39:10.864066 waagent[1584]: 2023-10-02T20:39:10.863839Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 15 ms] Oct 2 20:39:10.876565 waagent[1584]: 2023-10-02T20:39:10.876481Z INFO EnvHandler ExtHandler Current Firewall rules: Oct 2 20:39:10.876565 waagent[1584]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Oct 2 20:39:10.876565 waagent[1584]: pkts bytes target prot opt in out source destination Oct 2 20:39:10.876565 waagent[1584]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Oct 2 20:39:10.876565 waagent[1584]: pkts bytes target prot opt in out source destination Oct 2 20:39:10.876565 waagent[1584]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Oct 2 20:39:10.876565 waagent[1584]: pkts bytes target prot opt in out source destination Oct 2 20:39:10.876565 waagent[1584]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Oct 2 20:39:10.876565 waagent[1584]: 104 14499 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Oct 2 20:39:10.876565 waagent[1584]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Oct 2 20:39:10.878233 waagent[1584]: 2023-10-02T20:39:10.878172Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Oct 2 20:39:10.890531 waagent[1584]: 2023-10-02T20:39:10.890458Z INFO ExtHandler ExtHandler Looking for existing remote access users. Oct 2 20:39:10.902577 waagent[1584]: 2023-10-02T20:39:10.902500Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 5AE24BA9-F33C-4BA4-8467-989C6D537FDC;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Oct 2 20:39:31.846579 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Oct 2 20:39:43.668968 systemd[1]: Created slice system-sshd.slice. Oct 2 20:39:43.670699 systemd[1]: Started sshd@0-10.200.20.44:22-10.200.12.6:52408.service. Oct 2 20:39:44.169953 sshd[1623]: Accepted publickey for core from 10.200.12.6 port 52408 ssh2: RSA SHA256:pOhi17uv1dMw9wbwzof49dIVAjOqWAX9EZnbvXjLyxI Oct 2 20:39:44.176239 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:39:44.180030 systemd-logind[1366]: New session 3 of user core. Oct 2 20:39:44.180897 systemd[1]: Started session-3.scope. Oct 2 20:39:44.540960 systemd[1]: Started sshd@1-10.200.20.44:22-10.200.12.6:52420.service. Oct 2 20:39:44.965842 sshd[1631]: Accepted publickey for core from 10.200.12.6 port 52420 ssh2: RSA SHA256:pOhi17uv1dMw9wbwzof49dIVAjOqWAX9EZnbvXjLyxI Oct 2 20:39:44.967395 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:39:44.971304 systemd-logind[1366]: New session 4 of user core. Oct 2 20:39:44.971702 systemd[1]: Started session-4.scope. Oct 2 20:39:45.346512 systemd[1]: Started sshd@2-10.200.20.44:22-10.200.12.6:52432.service. Oct 2 20:39:45.615647 sshd[1631]: pam_unix(sshd:session): session closed for user core Oct 2 20:39:45.618144 systemd[1]: sshd@1-10.200.20.44:22-10.200.12.6:52420.service: Deactivated successfully. Oct 2 20:39:45.618899 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 20:39:45.619398 systemd-logind[1366]: Session 4 logged out. Waiting for processes to exit. Oct 2 20:39:45.620077 systemd-logind[1366]: Removed session 4. 
Oct 2 20:39:45.761321 sshd[1636]: Accepted publickey for core from 10.200.12.6 port 52432 ssh2: RSA SHA256:pOhi17uv1dMw9wbwzof49dIVAjOqWAX9EZnbvXjLyxI Oct 2 20:39:45.762800 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:39:45.766524 systemd-logind[1366]: New session 5 of user core. Oct 2 20:39:45.766908 systemd[1]: Started session-5.scope. Oct 2 20:39:46.061919 sshd[1636]: pam_unix(sshd:session): session closed for user core Oct 2 20:39:46.064483 systemd[1]: sshd@2-10.200.20.44:22-10.200.12.6:52432.service: Deactivated successfully. Oct 2 20:39:46.065139 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 20:39:46.065656 systemd-logind[1366]: Session 5 logged out. Waiting for processes to exit. Oct 2 20:39:46.066457 systemd-logind[1366]: Removed session 5. Oct 2 20:39:46.131691 systemd[1]: Started sshd@3-10.200.20.44:22-10.200.12.6:52440.service. Oct 2 20:39:46.547398 sshd[1643]: Accepted publickey for core from 10.200.12.6 port 52440 ssh2: RSA SHA256:pOhi17uv1dMw9wbwzof49dIVAjOqWAX9EZnbvXjLyxI Oct 2 20:39:46.548891 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:39:46.552751 systemd-logind[1366]: New session 6 of user core. Oct 2 20:39:46.553177 systemd[1]: Started session-6.scope. Oct 2 20:39:46.855395 sshd[1643]: pam_unix(sshd:session): session closed for user core Oct 2 20:39:46.857859 systemd-logind[1366]: Session 6 logged out. Waiting for processes to exit. Oct 2 20:39:46.857941 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 20:39:46.858885 systemd[1]: sshd@3-10.200.20.44:22-10.200.12.6:52440.service: Deactivated successfully. Oct 2 20:39:46.859787 systemd-logind[1366]: Removed session 6. Oct 2 20:39:46.926016 systemd[1]: Started sshd@4-10.200.20.44:22-10.200.12.6:52442.service. Oct 2 20:39:47.339280 update_engine[1368]: I1002 20:39:47.338860 1368 update_attempter.cc:505] Updating boot flags... Oct 2 20:39:47.347481 sshd[1649]: Accepted publickey for core from 10.200.12.6 port 52442 ssh2: RSA SHA256:pOhi17uv1dMw9wbwzof49dIVAjOqWAX9EZnbvXjLyxI Oct 2 20:39:47.348956 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:39:47.352845 systemd-logind[1366]: New session 7 of user core. Oct 2 20:39:47.353270 systemd[1]: Started session-7.scope. Oct 2 20:39:47.680678 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 20:39:47.681196 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:39:47.703092 dbus-daemon[1351]: avc: received setenforce notice (enforcing=1) Oct 2 20:39:47.703948 sudo[1691]: pam_unix(sudo:session): session closed for user root Oct 2 20:39:47.775383 sshd[1649]: pam_unix(sshd:session): session closed for user core Oct 2 20:39:47.778430 systemd-logind[1366]: Session 7 logged out. Waiting for processes to exit. Oct 2 20:39:47.779313 systemd[1]: sshd@4-10.200.20.44:22-10.200.12.6:52442.service: Deactivated successfully. Oct 2 20:39:47.779973 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 20:39:47.780480 systemd-logind[1366]: Removed session 7. Oct 2 20:39:47.851189 systemd[1]: Started sshd@5-10.200.20.44:22-10.200.12.6:54086.service. 
Oct 2 20:39:48.304332 sshd[1695]: Accepted publickey for core from 10.200.12.6 port 54086 ssh2: RSA SHA256:pOhi17uv1dMw9wbwzof49dIVAjOqWAX9EZnbvXjLyxI Oct 2 20:39:48.306272 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:39:48.309947 systemd-logind[1366]: New session 8 of user core. Oct 2 20:39:48.310385 systemd[1]: Started session-8.scope. Oct 2 20:39:48.564689 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 20:39:48.565267 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:39:48.568725 sudo[1699]: pam_unix(sudo:session): session closed for user root Oct 2 20:39:48.574174 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 20:39:48.574565 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:39:48.584923 systemd[1]: Stopping audit-rules.service... Oct 2 20:39:48.586000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:39:48.586000 audit[1702]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcbd740e0 a2=420 a3=0 items=0 ppid=1 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:48.597298 auditctl[1702]: No rules Oct 2 20:39:48.597723 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 20:39:48.597880 systemd[1]: Stopped audit-rules.service. Oct 2 20:39:48.599225 systemd[1]: Starting audit-rules.service... Oct 2 20:39:48.618145 kernel: audit: type=1305 audit(1696279188.586:176): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:39:48.618219 kernel: audit: type=1300 audit(1696279188.586:176): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcbd740e0 a2=420 a3=0 items=0 ppid=1 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:48.586000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:39:48.624844 kernel: audit: type=1327 audit(1696279188.586:176): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:39:48.624909 kernel: audit: type=1131 audit(1696279188.596:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:48.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:48.649155 augenrules[1719]: No rules Oct 2 20:39:48.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:48.649980 systemd[1]: Finished audit-rules.service. 
Oct 2 20:39:48.666012 sudo[1698]: pam_unix(sudo:session): session closed for user root Oct 2 20:39:48.665000 audit[1698]: USER_END pid=1698 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:39:48.684231 kernel: audit: type=1130 audit(1696279188.649:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:48.684313 kernel: audit: type=1106 audit(1696279188.665:179): pid=1698 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:39:48.684342 kernel: audit: type=1104 audit(1696279188.665:180): pid=1698 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:39:48.665000 audit[1698]: CRED_DISP pid=1698 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:39:48.737388 sshd[1695]: pam_unix(sshd:session): session closed for user core Oct 2 20:39:48.737000 audit[1695]: USER_END pid=1695 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:39:48.759682 systemd[1]: sshd@5-10.200.20.44:22-10.200.12.6:54086.service: Deactivated successfully. Oct 2 20:39:48.737000 audit[1695]: CRED_DISP pid=1695 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:39:48.760016 kernel: audit: type=1106 audit(1696279188.737:181): pid=1695 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:39:48.760374 systemd[1]: session-8.scope: Deactivated successfully. Oct 2 20:39:48.777478 systemd-logind[1366]: Session 8 logged out. Waiting for processes to exit. Oct 2 20:39:48.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.44:22-10.200.12.6:54086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:48.778122 kernel: audit: type=1104 audit(1696279188.737:182): pid=1695 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:39:48.794475 systemd-logind[1366]: Removed session 8. 
Oct 2 20:39:48.795031 kernel: audit: type=1131 audit(1696279188.759:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.44:22-10.200.12.6:54086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:48.806850 systemd[1]: Started sshd@6-10.200.20.44:22-10.200.12.6:54090.service. Oct 2 20:39:48.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.44:22-10.200.12.6:54090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:49.220000 audit[1725]: USER_ACCT pid=1725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:39:49.221590 sshd[1725]: Accepted publickey for core from 10.200.12.6 port 54090 ssh2: RSA SHA256:pOhi17uv1dMw9wbwzof49dIVAjOqWAX9EZnbvXjLyxI Oct 2 20:39:49.222000 audit[1725]: CRED_ACQ pid=1725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:39:49.222000 audit[1725]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6872e90 a2=3 a3=1 items=0 ppid=1 pid=1725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:49.222000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 20:39:49.223418 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:39:49.227588 systemd[1]: Started session-9.scope. Oct 2 20:39:49.228813 systemd-logind[1366]: New session 9 of user core. Oct 2 20:39:49.231000 audit[1725]: USER_START pid=1725 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:39:49.234000 audit[1727]: CRED_ACQ pid=1727 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:39:49.461000 audit[1728]: USER_ACCT pid=1728 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:39:49.462807 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 20:39:49.461000 audit[1728]: CRED_REFR pid=1728 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:39:49.463012 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:39:49.463000 audit[1728]: USER_START pid=1728 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? 
terminal=? res=success' Oct 2 20:39:50.048495 systemd[1]: Reloading. Oct 2 20:39:50.155486 /usr/lib/systemd/system-generators/torcx-generator[1757]: time="2023-10-02T20:39:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:39:50.158066 /usr/lib/systemd/system-generators/torcx-generator[1757]: time="2023-10-02T20:39:50Z" level=info msg="torcx already run" Oct 2 20:39:50.266251 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:39:50.266269 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:39:50.282847 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:39:50.341000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.341000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.341000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.341000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.341000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.341000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.341000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.341000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.341000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.341000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.341000 audit: BPF prog-id=38 op=LOAD Oct 2 20:39:50.342000 audit: BPF prog-id=35 op=UNLOAD Oct 2 20:39:50.342000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 20:39:50.342000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.342000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.342000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.342000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.342000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.342000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.342000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.342000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit: BPF prog-id=39 op=LOAD Oct 2 20:39:50.343000 audit: BPF prog-id=32 op=UNLOAD Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit: BPF prog-id=40 op=LOAD Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit: BPF prog-id=41 op=LOAD Oct 2 20:39:50.343000 audit: BPF prog-id=33 op=UNLOAD Oct 2 20:39:50.343000 audit: BPF prog-id=34 op=UNLOAD Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.343000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit: BPF prog-id=42 op=LOAD Oct 2 20:39:50.344000 audit: BPF prog-id=21 op=UNLOAD Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit: BPF prog-id=43 op=LOAD Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit: BPF prog-id=44 op=LOAD Oct 2 20:39:50.344000 audit: BPF prog-id=22 op=UNLOAD Oct 2 20:39:50.344000 audit: BPF prog-id=23 op=UNLOAD Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit: BPF prog-id=45 op=LOAD Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.344000 audit: BPF prog-id=46 op=LOAD Oct 2 20:39:50.344000 audit: BPF prog-id=24 op=UNLOAD Oct 2 20:39:50.344000 audit: BPF prog-id=25 op=UNLOAD Oct 2 20:39:50.345000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.345000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.345000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.345000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.345000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.345000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.345000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.345000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.345000 audit[1]: AVC avc: denied { bpf } for 
pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.345000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.345000 audit: BPF prog-id=47 op=LOAD Oct 2 20:39:50.345000 audit: BPF prog-id=30 op=UNLOAD Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit: BPF prog-id=48 op=LOAD Oct 2 20:39:50.347000 audit: BPF prog-id=27 op=UNLOAD Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit: BPF prog-id=49 op=LOAD Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit: BPF prog-id=50 op=LOAD Oct 2 20:39:50.347000 audit: BPF prog-id=28 op=UNLOAD Oct 2 20:39:50.347000 audit: BPF prog-id=29 op=UNLOAD Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.347000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.348000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.348000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.348000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.348000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.348000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.348000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.348000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.348000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.348000 audit: BPF prog-id=51 op=LOAD Oct 2 20:39:50.348000 audit: BPF prog-id=31 op=UNLOAD Oct 2 20:39:50.348000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.349000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:39:50.349000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:50.349000 audit: BPF prog-id=52 op=LOAD Oct 2 20:39:50.349000 audit: BPF prog-id=37 op=UNLOAD Oct 2 20:39:50.356648 systemd[1]: Started kubelet.service. Oct 2 20:39:50.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:50.372398 systemd[1]: Starting coreos-metadata.service... Oct 2 20:39:50.413328 coreos-metadata[1825]: Oct 02 20:39:50.413 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Oct 2 20:39:50.416180 coreos-metadata[1825]: Oct 02 20:39:50.416 INFO Fetch successful Oct 2 20:39:50.416337 coreos-metadata[1825]: Oct 02 20:39:50.416 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Oct 2 20:39:50.417878 coreos-metadata[1825]: Oct 02 20:39:50.417 INFO Fetch successful Oct 2 20:39:50.418226 coreos-metadata[1825]: Oct 02 20:39:50.418 INFO Fetching http://168.63.129.16/machine/f3f0b91e-c448-47f6-98d8-cf218891d019/5cf1f1dd%2D261c%2D4083%2Db509%2Dfebf83903b3e.%5Fci%2D3510.3.0%2Da%2Db6df30be81?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Oct 2 20:39:50.420002 coreos-metadata[1825]: Oct 02 20:39:50.419 INFO Fetch successful Oct 2 20:39:50.438614 kubelet[1817]: E1002 20:39:50.438562 1817 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 20:39:50.440465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 20:39:50.440589 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 20:39:50.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 20:39:50.456101 coreos-metadata[1825]: Oct 02 20:39:50.455 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Oct 2 20:39:50.469391 coreos-metadata[1825]: Oct 02 20:39:50.469 INFO Fetch successful Oct 2 20:39:50.480619 systemd[1]: Finished coreos-metadata.service. Oct 2 20:39:50.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:51.556074 systemd[1]: Stopped kubelet.service. Oct 2 20:39:51.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:51.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:51.574347 systemd[1]: Reloading. 
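
The kubelet exit above (status=1/FAILURE) is a common first-boot state on this image: /var/lib/kubelet/config.yaml is typically written later, during a kubeadm join or by other provisioning tooling, so the unit fails until that file exists and is restarted afterwards. The two metadata sources are different Azure endpoints: 168.63.129.16 is the platform wireserver (versions and goal state), while 169.254.169.254 is the Instance Metadata Service (IMDS), which requires a "Metadata: true" header on every request. A minimal sketch of the vmSize query coreos-metadata logs above, with the URL and api-version taken directly from the log:

    # Replicates the IMDS fetch logged by coreos-metadata above.
    # The "Metadata: true" header is mandatory for Azure IMDS requests.
    import urllib.request

    IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
                "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=2) as resp:
        print(resp.read().decode())  # prints the VM size string for this instance
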
Oct 2 20:39:51.674741 /usr/lib/systemd/system-generators/torcx-generator[1880]: time="2023-10-02T20:39:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:39:51.674768 /usr/lib/systemd/system-generators/torcx-generator[1880]: time="2023-10-02T20:39:51Z" level=info msg="torcx already run" Oct 2 20:39:51.757285 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:39:51.757303 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:39:51.773869 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:39:51.833000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.833000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.833000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.833000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.833000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.833000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.833000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.833000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.833000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.833000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.833000 audit: BPF prog-id=53 op=LOAD Oct 2 20:39:51.833000 audit: BPF prog-id=38 op=UNLOAD Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit: BPF prog-id=54 op=LOAD Oct 2 20:39:51.834000 audit: BPF prog-id=39 op=UNLOAD Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit: BPF prog-id=55 op=LOAD Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.834000 audit: BPF prog-id=56 op=LOAD Oct 2 20:39:51.834000 audit: BPF prog-id=40 op=UNLOAD Oct 2 20:39:51.834000 audit: BPF prog-id=41 op=UNLOAD Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit: BPF prog-id=57 op=LOAD Oct 2 20:39:51.836000 audit: BPF prog-id=42 op=UNLOAD Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit: BPF prog-id=58 op=LOAD Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit: BPF prog-id=59 op=LOAD Oct 2 20:39:51.836000 audit: BPF prog-id=43 op=UNLOAD Oct 2 20:39:51.836000 audit: BPF prog-id=44 op=UNLOAD Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit: BPF prog-id=60 op=LOAD Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit: BPF prog-id=61 op=LOAD Oct 2 20:39:51.836000 audit: BPF prog-id=45 op=UNLOAD Oct 2 20:39:51.836000 audit: BPF prog-id=46 op=UNLOAD Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.836000 audit: BPF prog-id=62 op=LOAD Oct 2 20:39:51.836000 audit: BPF prog-id=47 op=UNLOAD Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit: BPF prog-id=63 op=LOAD Oct 2 20:39:51.839000 audit: BPF prog-id=48 op=UNLOAD Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit: BPF prog-id=64 op=LOAD Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit: BPF prog-id=65 op=LOAD Oct 2 20:39:51.839000 audit: BPF prog-id=49 op=UNLOAD Oct 2 20:39:51.839000 audit: BPF prog-id=50 op=UNLOAD Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.839000 audit: BPF prog-id=66 op=LOAD Oct 2 20:39:51.839000 audit: BPF prog-id=51 op=UNLOAD Oct 2 20:39:51.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.841000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:51.841000 audit: BPF prog-id=67 op=LOAD Oct 2 20:39:51.841000 audit: BPF prog-id=52 op=UNLOAD Oct 2 20:39:51.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:39:51.857331 systemd[1]: Started kubelet.service. Oct 2 20:39:51.920376 kubelet[1940]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 20:39:51.920376 kubelet[1940]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 20:39:51.920376 kubelet[1940]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 20:39:51.920678 kubelet[1940]: I1002 20:39:51.920457 1940 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 20:39:51.921610 kubelet[1940]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 20:39:51.921610 kubelet[1940]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 20:39:51.921610 kubelet[1940]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 20:39:52.878116 kubelet[1940]: I1002 20:39:52.878086 1940 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 20:39:52.878273 kubelet[1940]: I1002 20:39:52.878261 1940 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 20:39:52.878542 kubelet[1940]: I1002 20:39:52.878528 1940 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 20:39:52.881139 kubelet[1940]: I1002 20:39:52.881111 1940 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 20:39:52.883235 kubelet[1940]: W1002 20:39:52.883218 1940 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 20:39:52.883775 kubelet[1940]: I1002 20:39:52.883758 1940 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 20:39:52.884074 kubelet[1940]: I1002 20:39:52.884061 1940 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 20:39:52.884243 kubelet[1940]: I1002 20:39:52.884229 1940 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 20:39:52.884399 kubelet[1940]: I1002 20:39:52.884387 1940 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 20:39:52.884467 kubelet[1940]: I1002 20:39:52.884457 1940 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 20:39:52.884604 kubelet[1940]: I1002 20:39:52.884592 1940 state_mem.go:36] "Initialized new in-memory state store" Oct 2 20:39:52.887635 kubelet[1940]: I1002 20:39:52.887617 1940 kubelet.go:381] "Attempting to sync node with API server" Oct 2 20:39:52.887736 kubelet[1940]: I1002 20:39:52.887725 1940 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 20:39:52.887809 kubelet[1940]: I1002 20:39:52.887800 1940 kubelet.go:281] "Adding apiserver pod source" Oct 2 20:39:52.887869 kubelet[1940]: I1002 20:39:52.887860 1940 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 20:39:52.888671 kubelet[1940]: E1002 20:39:52.888652 1940 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:39:52.888811 kubelet[1940]: E1002 20:39:52.888800 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:39:52.889569 kubelet[1940]: I1002 20:39:52.889551 1940 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 20:39:52.890045 kubelet[1940]: W1002 20:39:52.890030 1940 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
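
The container_manager_linux dump above shows the effective hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, all with Operator:LessThan. A small sketch of that comparison, with the limits taken from the log and the observed values purely hypothetical:

    # Hard-eviction limits as dumped in the NodeConfig above (memory in bytes,
    # the rest as free fractions); a signal breaches when observed < limit.
    thresholds = {
        "memory.available":  100 * 1024 * 1024,  # 100Mi
        "nodefs.available":  0.10,
        "nodefs.inodesFree": 0.05,
        "imagefs.available": 0.15,
    }
    observed = {  # hypothetical readings, for illustration only
        "memory.available":  512 * 1024 * 1024,
        "nodefs.available":  0.35,
        "nodefs.inodesFree": 0.90,
        "imagefs.available": 0.12,  # under the 0.15 limit -> would breach
    }
    for signal, limit in thresholds.items():
        print(f"{signal}: breached={observed[signal] < limit}")
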
Oct 2 20:39:52.890554 kubelet[1940]: I1002 20:39:52.890536 1940 server.go:1175] "Started kubelet" Oct 2 20:39:52.892000 audit[1940]: AVC avc: denied { mac_admin } for pid=1940 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:52.892000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:39:52.892000 audit[1940]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000bfc810 a1=40000e19e0 a2=4000bfc7e0 a3=25 items=0 ppid=1 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.892000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:39:52.894214 kubelet[1940]: I1002 20:39:52.894196 1940 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 20:39:52.894389 kubelet[1940]: I1002 20:39:52.894377 1940 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 20:39:52.893000 audit[1940]: AVC avc: denied { mac_admin } for pid=1940 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:52.893000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:39:52.893000 audit[1940]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000addea0 a1=4000adf1b8 a2=4000abf080 a3=25 items=0 ppid=1 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.894793 kubelet[1940]: E1002 20:39:52.894781 1940 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 20:39:52.894886 kubelet[1940]: E1002 20:39:52.894876 1940 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 20:39:52.895100 kubelet[1940]: I1002 20:39:52.895078 1940 server.go:438] "Adding debug handlers to kubelet server" Oct 2 20:39:52.893000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:39:52.896181 kubelet[1940]: I1002 20:39:52.896170 1940 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 20:39:52.896425 kubelet[1940]: I1002 20:39:52.896412 1940 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 20:39:52.896910 kubelet[1940]: I1002 20:39:52.896899 1940 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 20:39:52.899102 kubelet[1940]: E1002 20:39:52.899084 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:39:52.900563 kubelet[1940]: I1002 20:39:52.900527 1940 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 20:39:52.906000 audit[1954]: NETFILTER_CFG table=mangle:6 family=2 entries=2 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.906000 audit[1954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffeb8e5640 a2=0 a3=1 items=0 ppid=1940 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.906000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:39:52.912000 audit[1955]: NETFILTER_CFG table=filter:7 family=2 entries=2 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.912000 audit[1955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffce008720 a2=0 a3=1 items=0 ppid=1940 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.912000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:39:52.918000 audit[1957]: NETFILTER_CFG table=filter:8 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.918000 audit[1957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdce64170 a2=0 a3=1 items=0 ppid=1940 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:39:52.921000 audit[1959]: NETFILTER_CFG table=filter:9 family=2 entries=2 op=nft_register_chain pid=1959 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.921000 audit[1959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd2d535a0 a2=0 a3=1 items=0 ppid=1940 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.921000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:39:52.929395 kubelet[1940]: E1002 20:39:52.929347 1940 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.200.20.44" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:39:52.929638 kubelet[1940]: W1002 20:39:52.929423 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:39:52.929638 kubelet[1940]: E1002 20:39:52.929443 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:39:52.929638 kubelet[1940]: W1002 20:39:52.929478 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:39:52.929638 kubelet[1940]: E1002 20:39:52.929500 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:39:52.929638 kubelet[1940]: W1002 20:39:52.929524 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.200.20.44" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:39:52.929638 kubelet[1940]: E1002 20:39:52.929532 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.44" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:39:52.929807 kubelet[1940]: E1002 20:39:52.929551 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04c202655", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", 
Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 890517077, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 890517077, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:52.930359 kubelet[1940]: E1002 20:39:52.930303 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04c6283e9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 894866409, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 894866409, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
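
The rejected Event posts above, together with the lease and list failures at 20:39:52.929, are attributed to User "system:anonymous" because the kubelet has not yet completed the client-certificate bootstrap it announced earlier ("Client rotation is on, will bootstrap in background"), so the API server's RBAC denies the writes and the kubelet marks them "will not retry". To pull the Reason/Message pairs out of such dumps in a saved journal, a regex written against the Go struct formatting shown here is enough (field order may differ in other kubelet versions):

    # Extract Reason/Message from the rejected v1.Event dumps in this journal.
    import re

    line = 'Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem",'
    pattern = re.compile(r'Reason:"(?P<reason>[^"]*)", Message:"(?P<message>[^"]*)"')

    m = pattern.search(line)
    if m:
        print(m.group("reason"), "->", m.group("message"))
    # -> InvalidDiskCapacity -> invalid capacity 0 on image filesystem
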
Oct 2 20:39:52.930654 kubelet[1940]: I1002 20:39:52.930637 1940 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 20:39:52.930765 kubelet[1940]: I1002 20:39:52.930721 1940 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 20:39:52.930963 kubelet[1940]: I1002 20:39:52.930950 1940 state_mem.go:36] "Initialized new in-memory state store" Oct 2 20:39:52.931404 kubelet[1940]: E1002 20:39:52.931331 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a4fbb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.44 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929980347, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929980347, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:52.932010 kubelet[1940]: E1002 20:39:52.931934 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a9a43", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.44 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929999427, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929999427, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:39:52.932740 kubelet[1940]: E1002 20:39:52.932675 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7aa6eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.44 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 930002667, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 930002667, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:52.936794 kubelet[1940]: I1002 20:39:52.936777 1940 policy_none.go:49] "None policy: Start" Oct 2 20:39:52.937432 kubelet[1940]: I1002 20:39:52.937418 1940 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 20:39:52.937534 kubelet[1940]: I1002 20:39:52.937524 1940 state_mem.go:35] "Initializing new in-memory state store" Oct 2 20:39:52.941000 audit[1965]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.941000 audit[1965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe6781630 a2=0 a3=1 items=0 ppid=1940 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.941000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 20:39:52.945000 audit[1966]: NETFILTER_CFG table=nat:11 family=2 entries=2 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.945000 audit[1966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd6f461d0 a2=0 a3=1 items=0 ppid=1940 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.945000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 20:39:52.947538 systemd[1]: Created slice kubepods.slice. Oct 2 20:39:52.951349 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 20:39:52.954261 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 20:39:52.961516 kubelet[1940]: I1002 20:39:52.961486 1940 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 20:39:52.960000 audit[1940]: AVC avc: denied { mac_admin } for pid=1940 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:39:52.960000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:39:52.960000 audit[1940]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e30630 a1=400051e7f8 a2=4000e305d0 a3=25 items=0 ppid=1 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.960000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:39:52.961745 kubelet[1940]: I1002 20:39:52.961544 1940 server.go:86] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 20:39:52.961745 kubelet[1940]: I1002 20:39:52.961682 1940 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 20:39:52.962950 kubelet[1940]: E1002 20:39:52.962914 1940 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.44\" not found" Oct 2 20:39:52.965824 kubelet[1940]: E1002 20:39:52.965120 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f050861cf2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 964308210, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 964308210, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:39:52.971000 audit[1969]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.971000 audit[1969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff8e5e680 a2=0 a3=1 items=0 ppid=1940 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.971000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 20:39:52.988000 audit[1972]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.988000 audit[1972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffff80ed6b0 a2=0 a3=1 items=0 ppid=1940 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.988000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 20:39:52.989000 audit[1973]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.989000 audit[1973]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffef62ee30 a2=0 a3=1 items=0 ppid=1940 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 20:39:52.991000 audit[1974]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.991000 audit[1974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee5f6020 a2=0 a3=1 items=0 ppid=1940 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.991000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:39:52.994000 audit[1976]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.994000 audit[1976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffebc0f210 a2=0 a3=1 items=0 ppid=1940 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.994000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 20:39:52.997241 
kubelet[1940]: E1002 20:39:52.997210 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:52.997744 kubelet[1940]: I1002 20:39:52.997717 1940 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.44" Oct 2 20:39:52.998694 kubelet[1940]: E1002 20:39:52.998664 1940 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.44" Oct 2 20:39:52.998922 kubelet[1940]: E1002 20:39:52.998859 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a4fbb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.44 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929980347, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 997673319, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a4fbb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:52.999613 kubelet[1940]: E1002 20:39:52.999536 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a9a43", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.44 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929999427, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 997691159, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a9a43" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:39:53.000222 kubelet[1940]: E1002 20:39:53.000166 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7aa6eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.44 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 930002667, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 997694719, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7aa6eb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:52.999000 audit[1978]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:52.999000 audit[1978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc7633140 a2=0 a3=1 items=0 ppid=1940 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:52.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 20:39:53.030000 audit[1981]: NETFILTER_CFG table=nat:18 family=2 entries=1 op=nft_register_rule pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:53.030000 audit[1981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffdc7f06a0 a2=0 a3=1 items=0 ppid=1940 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.030000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 20:39:53.033000 audit[1983]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:53.033000 audit[1983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffff9b05c00 a2=0 a3=1 items=0 ppid=1940 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.033000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 20:39:53.057000 audit[1986]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_rule pid=1986 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:53.057000 audit[1986]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffff1eb7f00 a2=0 a3=1 items=0 ppid=1940 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.057000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 20:39:53.058468 kubelet[1940]: I1002 20:39:53.058428 1940 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 20:39:53.059000 audit[1987]: NETFILTER_CFG table=mangle:21 family=10 entries=2 op=nft_register_chain pid=1987 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.059000 audit[1987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcdfd8f00 a2=0 a3=1 items=0 ppid=1940 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.059000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:39:53.059000 audit[1988]: NETFILTER_CFG table=mangle:22 family=2 entries=1 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:53.059000 audit[1988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffce76c310 a2=0 a3=1 items=0 ppid=1940 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:39:53.061000 audit[1989]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.061000 audit[1989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcb31f770 a2=0 a3=1 items=0 ppid=1940 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 20:39:53.061000 audit[1990]: NETFILTER_CFG table=nat:24 family=2 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:53.061000 audit[1990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5380470 a2=0 a3=1 items=0 ppid=1940 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:39:53.063000 audit[1992]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:39:53.063000 audit[1992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc8bf4cb0 a2=0 a3=1 items=0 ppid=1940 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.063000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:39:53.064000 audit[1993]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.064000 audit[1993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcabec5b0 a2=0 a3=1 items=0 ppid=1940 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.064000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 20:39:53.066000 audit[1994]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.066000 audit[1994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffe8ad3fd0 a2=0 a3=1 items=0 ppid=1940 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.066000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:39:53.070000 audit[1996]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.070000 audit[1996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffcc8442a0 a2=0 a3=1 items=0 ppid=1940 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.070000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 20:39:53.071000 audit[1997]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_chain pid=1997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.071000 audit[1997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff6b450c0 a2=0 a3=1 items=0 ppid=1940 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.071000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 20:39:53.073000 audit[1998]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.073000 audit[1998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2604c20 a2=0 a3=1 items=0 ppid=1940 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.073000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:39:53.076000 audit[2000]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.076000 audit[2000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffdd3ea8a0 a2=0 a3=1 items=0 ppid=1940 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.076000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 20:39:53.080000 audit[2002]: NETFILTER_CFG table=nat:32 family=10 entries=2 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.080000 audit[2002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd0989fb0 a2=0 a3=1 items=0 ppid=1940 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.080000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 20:39:53.083000 audit[2004]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_rule pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.083000 audit[2004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffc6128a60 a2=0 a3=1 items=0 ppid=1940 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.083000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 20:39:53.086000 audit[2006]: NETFILTER_CFG table=nat:34 family=10 entries=1 op=nft_register_rule pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.086000 audit[2006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffdf0af210 a2=0 a3=1 items=0 ppid=1940 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.086000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 20:39:53.095000 audit[2008]: NETFILTER_CFG table=nat:35 family=10 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.095000 audit[2008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=fffff68fb7e0 a2=0 a3=1 items=0 ppid=1940 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 20:39:53.096685 kubelet[1940]: I1002 20:39:53.096665 1940 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 20:39:53.096807 kubelet[1940]: I1002 20:39:53.096796 1940 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 20:39:53.096881 kubelet[1940]: I1002 20:39:53.096871 1940 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 20:39:53.097018 kubelet[1940]: E1002 20:39:53.096997 1940 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 20:39:53.097305 kubelet[1940]: E1002 20:39:53.097286 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:53.098361 kubelet[1940]: W1002 20:39:53.098340 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:39:53.098464 kubelet[1940]: E1002 20:39:53.098454 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:39:53.097000 audit[2009]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.097000 audit[2009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd522e9c0 a2=0 a3=1 items=0 ppid=1940 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:39:53.099000 audit[2010]: NETFILTER_CFG table=nat:37 family=10 entries=1 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.099000 audit[2010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff568d200 a2=0 a3=1 items=0 ppid=1940 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:39:53.100000 audit[2011]: NETFILTER_CFG table=filter:38 family=10 entries=1 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:39:53.100000 audit[2011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4494b30 a2=0 a3=1 items=0 ppid=1940 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:39:53.100000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:39:53.131375 kubelet[1940]: E1002 20:39:53.131295 1940 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.200.20.44" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:39:53.197541 kubelet[1940]: E1002 20:39:53.197507 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:53.199287 kubelet[1940]: I1002 20:39:53.199263 1940 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.44" Oct 2 20:39:53.200213 kubelet[1940]: E1002 20:39:53.200193 1940 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.44" Oct 2 20:39:53.200657 kubelet[1940]: E1002 20:39:53.200586 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a4fbb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.44 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929980347, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 53, 199229042, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a4fbb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:39:53.201480 kubelet[1940]: E1002 20:39:53.201425 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a9a43", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.44 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929999427, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 53, 199241282, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a9a43" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:53.293671 kubelet[1940]: E1002 20:39:53.293598 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7aa6eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.44 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 930002667, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 53, 199244282, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7aa6eb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:39:53.297710 kubelet[1940]: E1002 20:39:53.297692 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:53.398384 kubelet[1940]: E1002 20:39:53.398349 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:53.498972 kubelet[1940]: E1002 20:39:53.498940 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:53.533310 kubelet[1940]: E1002 20:39:53.533282 1940 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.200.20.44" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:39:53.599496 kubelet[1940]: E1002 20:39:53.599466 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:53.601187 kubelet[1940]: I1002 20:39:53.601168 1940 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.44" Oct 2 20:39:53.602315 kubelet[1940]: E1002 20:39:53.602290 1940 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.44" Oct 2 20:39:53.602458 kubelet[1940]: E1002 20:39:53.602388 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a4fbb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.44 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929980347, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 53, 601130419, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a4fbb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:39:53.693593 kubelet[1940]: E1002 20:39:53.693455 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a9a43", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.44 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929999427, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 53, 601144339, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a9a43" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:53.700667 kubelet[1940]: E1002 20:39:53.700631 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:53.801114 kubelet[1940]: E1002 20:39:53.801086 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:53.816296 kubelet[1940]: W1002 20:39:53.816268 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.200.20.44" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:39:53.816296 kubelet[1940]: E1002 20:39:53.816298 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.44" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:39:53.836234 kubelet[1940]: W1002 20:39:53.836210 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:39:53.836234 kubelet[1940]: E1002 20:39:53.836247 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:39:53.889579 kubelet[1940]: E1002 20:39:53.889555 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:39:53.893507 kubelet[1940]: E1002 20:39:53.893404 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7aa6eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 
1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.44 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 930002667, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 53, 601147499, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7aa6eb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:53.901881 kubelet[1940]: E1002 20:39:53.901862 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:53.995251 kubelet[1940]: W1002 20:39:53.995145 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:39:53.995251 kubelet[1940]: E1002 20:39:53.995172 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:39:54.002596 kubelet[1940]: E1002 20:39:54.002566 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:54.102880 kubelet[1940]: E1002 20:39:54.102860 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:54.203489 kubelet[1940]: E1002 20:39:54.203468 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:54.304146 kubelet[1940]: E1002 20:39:54.304082 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:54.334274 kubelet[1940]: E1002 20:39:54.334250 1940 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.200.20.44" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:39:54.403646 kubelet[1940]: I1002 20:39:54.403626 1940 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.44" Oct 2 20:39:54.404220 kubelet[1940]: E1002 20:39:54.404193 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:54.404710 kubelet[1940]: E1002 20:39:54.404694 1940 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.44" Oct 2 20:39:54.404881 kubelet[1940]: E1002 20:39:54.404818 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a4fbb", GenerateName:"", Namespace:"default", 
SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.44 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929980347, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 54, 403587987, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a4fbb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:54.405772 kubelet[1940]: E1002 20:39:54.405716 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a9a43", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.44 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929999427, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 54, 403598107, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a9a43" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:39:54.494365 kubelet[1940]: E1002 20:39:54.494294 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7aa6eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.44 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 930002667, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 54, 403601587, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7aa6eb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:54.504917 kubelet[1940]: E1002 20:39:54.504903 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:54.605739 kubelet[1940]: E1002 20:39:54.605665 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:54.673638 kubelet[1940]: W1002 20:39:54.673614 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:39:54.673781 kubelet[1940]: E1002 20:39:54.673771 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:39:54.706201 kubelet[1940]: E1002 20:39:54.706187 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:54.806916 kubelet[1940]: E1002 20:39:54.806896 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:54.890508 kubelet[1940]: E1002 20:39:54.890480 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:39:54.907941 kubelet[1940]: E1002 20:39:54.907927 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:55.008497 kubelet[1940]: E1002 20:39:55.008472 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:55.108852 kubelet[1940]: E1002 20:39:55.108835 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:55.209683 kubelet[1940]: E1002 20:39:55.209619 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:55.310421 kubelet[1940]: E1002 20:39:55.310401 1940 kubelet.go:2448] "Error getting 
node" err="node \"10.200.20.44\" not found" Oct 2 20:39:55.411263 kubelet[1940]: E1002 20:39:55.411246 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:55.512016 kubelet[1940]: E1002 20:39:55.511939 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:55.564259 kubelet[1940]: W1002 20:39:55.564238 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.200.20.44" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:39:55.564410 kubelet[1940]: E1002 20:39:55.564397 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.20.44" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:39:55.612802 kubelet[1940]: E1002 20:39:55.612786 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:55.713355 kubelet[1940]: E1002 20:39:55.713340 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:55.757561 kubelet[1940]: W1002 20:39:55.757543 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:39:55.757674 kubelet[1940]: E1002 20:39:55.757664 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:39:55.813944 kubelet[1940]: E1002 20:39:55.813887 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:55.891284 kubelet[1940]: E1002 20:39:55.891253 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:39:55.914715 kubelet[1940]: E1002 20:39:55.914700 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:55.935671 kubelet[1940]: E1002 20:39:55.935647 1940 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.200.20.44" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:39:56.005658 kubelet[1940]: I1002 20:39:56.005633 1940 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.44" Oct 2 20:39:56.006412 kubelet[1940]: E1002 20:39:56.006395 1940 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.44" Oct 2 20:39:56.006768 kubelet[1940]: E1002 20:39:56.006700 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a4fbb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.44 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929980347, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 56, 5602010, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a4fbb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:56.007550 kubelet[1940]: E1002 20:39:56.007498 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a9a43", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.44 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929999427, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 56, 5608249, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a9a43" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:39:56.008307 kubelet[1940]: E1002 20:39:56.008244 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7aa6eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.44 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 930002667, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 56, 5610968, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7aa6eb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:56.015370 kubelet[1940]: E1002 20:39:56.015357 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:56.116728 kubelet[1940]: E1002 20:39:56.116660 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:56.217447 kubelet[1940]: E1002 20:39:56.217426 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:56.318187 kubelet[1940]: E1002 20:39:56.318167 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:56.419054 kubelet[1940]: E1002 20:39:56.419036 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:56.519749 kubelet[1940]: E1002 20:39:56.519732 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:56.620461 kubelet[1940]: E1002 20:39:56.620445 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:56.721206 kubelet[1940]: E1002 20:39:56.721139 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:56.821782 kubelet[1940]: E1002 20:39:56.821754 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:56.892193 kubelet[1940]: E1002 20:39:56.892160 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:39:56.922631 kubelet[1940]: E1002 20:39:56.922609 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:57.023573 kubelet[1940]: E1002 20:39:57.023490 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:57.044756 kubelet[1940]: W1002 20:39:57.044727 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:39:57.044756 kubelet[1940]: E1002 20:39:57.044759 1940 
reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:39:57.124086 kubelet[1940]: E1002 20:39:57.124054 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:57.224650 kubelet[1940]: E1002 20:39:57.224626 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:57.325233 kubelet[1940]: E1002 20:39:57.325169 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:57.425926 kubelet[1940]: E1002 20:39:57.425891 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:57.454157 kubelet[1940]: W1002 20:39:57.454132 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:39:57.454157 kubelet[1940]: E1002 20:39:57.454162 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:39:57.526539 kubelet[1940]: E1002 20:39:57.526512 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:57.627209 kubelet[1940]: E1002 20:39:57.627119 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:57.727780 kubelet[1940]: E1002 20:39:57.727756 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:57.828342 kubelet[1940]: E1002 20:39:57.828320 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:57.892808 kubelet[1940]: E1002 20:39:57.892771 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:39:57.929268 kubelet[1940]: E1002 20:39:57.929248 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:57.962916 kubelet[1940]: E1002 20:39:57.962891 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:39:58.029583 kubelet[1940]: E1002 20:39:58.029558 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:58.130140 kubelet[1940]: E1002 20:39:58.130113 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:58.230749 kubelet[1940]: E1002 20:39:58.230682 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:58.331277 kubelet[1940]: E1002 20:39:58.331243 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:58.432019 kubelet[1940]: E1002 20:39:58.431981 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:58.532630 kubelet[1940]: E1002 20:39:58.532555 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:58.633228 kubelet[1940]: E1002 20:39:58.633201 1940 kubelet.go:2448] "Error getting node" err="node 
\"10.200.20.44\" not found" Oct 2 20:39:58.733807 kubelet[1940]: E1002 20:39:58.733774 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:58.834396 kubelet[1940]: E1002 20:39:58.834331 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:58.893800 kubelet[1940]: E1002 20:39:58.893760 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:39:58.935111 kubelet[1940]: E1002 20:39:58.935093 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:59.035815 kubelet[1940]: E1002 20:39:59.035793 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:59.136512 kubelet[1940]: E1002 20:39:59.136485 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:59.137531 kubelet[1940]: E1002 20:39:59.137505 1940 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.200.20.44" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:39:59.207618 kubelet[1940]: I1002 20:39:59.207597 1940 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.44" Oct 2 20:39:59.208778 kubelet[1940]: E1002 20:39:59.208755 1940 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.20.44" Oct 2 20:39:59.208933 kubelet[1940]: E1002 20:39:59.208848 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a4fbb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.20.44 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929980347, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 59, 207565701, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a4fbb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:39:59.209698 kubelet[1940]: E1002 20:39:59.209639 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7a9a43", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.20.44 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 929999427, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 59, 207570300, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7a9a43" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:39:59.210336 kubelet[1940]: E1002 20:39:59.210282 1940 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.20.44.178a64f04e7aa6eb", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.20.44", UID:"10.200.20.44", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.20.44 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.20.44"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 39, 52, 930002667, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 39, 59, 207573099, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.20.44.178a64f04e7aa6eb" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:39:59.237490 kubelet[1940]: E1002 20:39:59.237472 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:59.338026 kubelet[1940]: E1002 20:39:59.338001 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:59.438813 kubelet[1940]: E1002 20:39:59.438725 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:59.539400 kubelet[1940]: E1002 20:39:59.539371 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:59.639961 kubelet[1940]: E1002 20:39:59.639940 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:59.740590 kubelet[1940]: E1002 20:39:59.740508 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:59.841126 kubelet[1940]: E1002 20:39:59.841089 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:39:59.894623 kubelet[1940]: E1002 20:39:59.894589 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:39:59.942187 kubelet[1940]: E1002 20:39:59.942154 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:00.042853 kubelet[1940]: E1002 20:40:00.042775 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:00.143383 kubelet[1940]: E1002 20:40:00.143357 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:00.243922 kubelet[1940]: E1002 20:40:00.243899 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:00.344503 kubelet[1940]: E1002 20:40:00.344437 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:00.445253 kubelet[1940]: E1002 20:40:00.445225 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:00.545784 kubelet[1940]: E1002 20:40:00.545756 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:00.646370 kubelet[1940]: E1002 20:40:00.646340 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:00.746967 kubelet[1940]: E1002 20:40:00.746939 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:00.778231 kubelet[1940]: W1002 20:40:00.778178 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:40:00.778231 kubelet[1940]: E1002 20:40:00.778206 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:40:00.844444 kubelet[1940]: W1002 20:40:00.844423 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.200.20.44" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:40:00.844444 kubelet[1940]: E1002 20:40:00.844448 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes 
"10.200.20.44" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:40:00.847524 kubelet[1940]: E1002 20:40:00.847502 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:00.894979 kubelet[1940]: E1002 20:40:00.894923 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:00.948411 kubelet[1940]: E1002 20:40:00.948331 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:01.049073 kubelet[1940]: E1002 20:40:01.049044 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:01.149793 kubelet[1940]: E1002 20:40:01.149769 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:01.234608 kubelet[1940]: W1002 20:40:01.234510 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:40:01.234865 kubelet[1940]: E1002 20:40:01.234848 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:40:01.250921 kubelet[1940]: E1002 20:40:01.250898 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:01.319460 kubelet[1940]: W1002 20:40:01.319435 1940 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:40:01.319604 kubelet[1940]: E1002 20:40:01.319591 1940 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:40:01.351503 kubelet[1940]: E1002 20:40:01.351483 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:01.452197 kubelet[1940]: E1002 20:40:01.452174 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:01.552844 kubelet[1940]: E1002 20:40:01.552754 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:01.653383 kubelet[1940]: E1002 20:40:01.653353 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:01.753906 kubelet[1940]: E1002 20:40:01.753880 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:01.854514 kubelet[1940]: E1002 20:40:01.854443 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:01.895911 kubelet[1940]: E1002 20:40:01.895877 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:01.955499 kubelet[1940]: E1002 20:40:01.955474 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:02.056247 kubelet[1940]: E1002 20:40:02.056221 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 
20:40:02.157372 kubelet[1940]: E1002 20:40:02.157347 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:02.257922 kubelet[1940]: E1002 20:40:02.257902 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:02.358451 kubelet[1940]: E1002 20:40:02.358433 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:02.459227 kubelet[1940]: E1002 20:40:02.459144 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:02.559834 kubelet[1940]: E1002 20:40:02.559812 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:02.660438 kubelet[1940]: E1002 20:40:02.660421 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:02.761078 kubelet[1940]: E1002 20:40:02.761012 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:02.861674 kubelet[1940]: E1002 20:40:02.861651 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:02.880972 kubelet[1940]: I1002 20:40:02.880933 1940 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 20:40:02.896171 kubelet[1940]: E1002 20:40:02.896155 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:02.962497 kubelet[1940]: E1002 20:40:02.962460 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:02.963727 kubelet[1940]: E1002 20:40:02.963687 1940 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.20.44\" not found" Oct 2 20:40:02.964152 kubelet[1940]: E1002 20:40:02.964137 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:03.063498 kubelet[1940]: E1002 20:40:03.063258 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:03.164877 kubelet[1940]: E1002 20:40:03.164848 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:03.265373 kubelet[1940]: E1002 20:40:03.265353 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:03.287862 kubelet[1940]: E1002 20:40:03.287844 1940 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.20.44" not found Oct 2 20:40:03.366382 kubelet[1940]: E1002 20:40:03.366156 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:03.466800 kubelet[1940]: E1002 20:40:03.466769 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:03.567268 kubelet[1940]: E1002 20:40:03.567244 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:03.667914 kubelet[1940]: E1002 20:40:03.667891 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:03.768297 kubelet[1940]: E1002 20:40:03.768271 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:03.868813 kubelet[1940]: E1002 20:40:03.868787 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 
20:40:03.897245 kubelet[1940]: E1002 20:40:03.897215 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:03.969778 kubelet[1940]: E1002 20:40:03.969554 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:04.070108 kubelet[1940]: E1002 20:40:04.070084 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:04.170541 kubelet[1940]: E1002 20:40:04.170511 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:04.271010 kubelet[1940]: E1002 20:40:04.270774 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:04.333244 kubelet[1940]: E1002 20:40:04.333217 1940 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.20.44" not found Oct 2 20:40:04.371360 kubelet[1940]: E1002 20:40:04.371337 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:04.472090 kubelet[1940]: E1002 20:40:04.472065 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:04.572604 kubelet[1940]: E1002 20:40:04.572349 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:04.672801 kubelet[1940]: E1002 20:40:04.672782 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:04.773566 kubelet[1940]: E1002 20:40:04.773529 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:04.874258 kubelet[1940]: E1002 20:40:04.874045 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:04.897433 kubelet[1940]: E1002 20:40:04.897414 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:04.974774 kubelet[1940]: E1002 20:40:04.974743 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:05.075251 kubelet[1940]: E1002 20:40:05.075233 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:05.176480 kubelet[1940]: E1002 20:40:05.176454 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:05.277237 kubelet[1940]: E1002 20:40:05.277215 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:05.378037 kubelet[1940]: E1002 20:40:05.378014 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:05.478902 kubelet[1940]: E1002 20:40:05.478713 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:05.541638 kubelet[1940]: E1002 20:40:05.541617 1940 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.20.44\" not found" node="10.200.20.44" Oct 2 20:40:05.579926 kubelet[1940]: E1002 20:40:05.579912 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:05.610056 kubelet[1940]: I1002 20:40:05.610041 1940 kubelet_node_status.go:70] "Attempting to register node" node="10.200.20.44" Oct 2 20:40:05.680788 kubelet[1940]: E1002 20:40:05.680767 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:05.734813 kubelet[1940]: I1002 20:40:05.734558 1940 kubelet_node_status.go:73] "Successfully 
registered node" node="10.200.20.44" Oct 2 20:40:05.781610 kubelet[1940]: E1002 20:40:05.781577 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:05.875000 audit[1728]: USER_END pid=1728 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:40:05.877325 sudo[1728]: pam_unix(sudo:session): session closed for user root Oct 2 20:40:05.881062 kernel: kauditd_printk_skb: 472 callbacks suppressed Oct 2 20:40:05.881216 kernel: audit: type=1106 audit(1696279205.875:579): pid=1728 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:40:05.882435 kubelet[1940]: E1002 20:40:05.882417 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:05.897687 kubelet[1940]: E1002 20:40:05.897674 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:05.875000 audit[1728]: CRED_DISP pid=1728 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:40:05.913913 kernel: audit: type=1104 audit(1696279205.875:580): pid=1728 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:40:05.943185 sshd[1725]: pam_unix(sshd:session): session closed for user core Oct 2 20:40:05.943000 audit[1725]: USER_END pid=1725 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:40:05.945779 systemd[1]: sshd@6-10.200.20.44:22-10.200.12.6:54090.service: Deactivated successfully. Oct 2 20:40:05.946609 systemd[1]: session-9.scope: Deactivated successfully. Oct 2 20:40:05.943000 audit[1725]: CRED_DISP pid=1725 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:40:05.969333 systemd-logind[1366]: Session 9 logged out. Waiting for processes to exit. 
Oct 2 20:40:05.983226 kubelet[1940]: E1002 20:40:05.983202 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:05.988687 kernel: audit: type=1106 audit(1696279205.943:581): pid=1725 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:40:05.988755 kernel: audit: type=1104 audit(1696279205.943:582): pid=1725 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Oct 2 20:40:05.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.44:22-10.200.12.6:54090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:40:06.008064 kernel: audit: type=1131 audit(1696279205.944:583): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.44:22-10.200.12.6:54090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:40:06.008420 systemd-logind[1366]: Removed session 9. Oct 2 20:40:06.084391 kubelet[1940]: E1002 20:40:06.084364 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:06.185228 kubelet[1940]: E1002 20:40:06.185197 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:06.285971 kubelet[1940]: E1002 20:40:06.285659 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:06.386644 kubelet[1940]: E1002 20:40:06.386617 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:06.487211 kubelet[1940]: E1002 20:40:06.487193 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:06.587915 kubelet[1940]: E1002 20:40:06.587625 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:06.688536 kubelet[1940]: E1002 20:40:06.688508 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:06.788976 kubelet[1940]: E1002 20:40:06.788958 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:06.889549 kubelet[1940]: E1002 20:40:06.889531 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:06.898804 kubelet[1940]: E1002 20:40:06.898771 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:06.990343 kubelet[1940]: E1002 20:40:06.990325 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:07.090815 kubelet[1940]: E1002 20:40:07.090788 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:07.192031 kubelet[1940]: E1002 20:40:07.191790 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:07.292318 kubelet[1940]: E1002 20:40:07.292283 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:07.392999 kubelet[1940]: E1002 20:40:07.392965 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" 
Oct 2 20:40:07.493761 kubelet[1940]: E1002 20:40:07.493551 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:07.594495 kubelet[1940]: E1002 20:40:07.594475 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:07.694977 kubelet[1940]: E1002 20:40:07.694948 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:07.795646 kubelet[1940]: E1002 20:40:07.795451 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:07.896065 kubelet[1940]: E1002 20:40:07.896044 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:07.899327 kubelet[1940]: E1002 20:40:07.899315 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:07.965117 kubelet[1940]: E1002 20:40:07.965084 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:07.996349 kubelet[1940]: E1002 20:40:07.996334 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:08.098765 kubelet[1940]: E1002 20:40:08.096739 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:08.197158 kubelet[1940]: E1002 20:40:08.197133 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:08.297603 kubelet[1940]: E1002 20:40:08.297583 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:08.397999 kubelet[1940]: E1002 20:40:08.397969 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:08.498322 kubelet[1940]: E1002 20:40:08.498301 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:08.598785 kubelet[1940]: E1002 20:40:08.598754 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:08.699481 kubelet[1940]: E1002 20:40:08.699204 1940 kubelet.go:2448] "Error getting node" err="node \"10.200.20.44\" not found" Oct 2 20:40:08.800268 kubelet[1940]: I1002 20:40:08.800240 1940 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 20:40:08.800712 env[1383]: time="2023-10-02T20:40:08.800676946Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 2 20:40:08.800973 kubelet[1940]: I1002 20:40:08.800898 1940 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 20:40:08.801312 kubelet[1940]: E1002 20:40:08.801191 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:08.899621 kubelet[1940]: E1002 20:40:08.899587 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:08.899757 kubelet[1940]: I1002 20:40:08.899594 1940 apiserver.go:52] "Watching apiserver" Oct 2 20:40:08.902199 kubelet[1940]: I1002 20:40:08.902175 1940 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:40:08.902281 kubelet[1940]: I1002 20:40:08.902276 1940 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:40:08.906460 systemd[1]: Created slice kubepods-burstable-pod14517ab4_0630_4cc4_b4f6_d38d30945409.slice. Oct 2 20:40:08.912620 systemd[1]: Created slice kubepods-besteffort-pod1b86df9e_d158_4170_8a82_2c244273c7d3.slice. Oct 2 20:40:09.101666 kubelet[1940]: I1002 20:40:09.101189 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-xtables-lock\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.101666 kubelet[1940]: I1002 20:40:09.101241 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-host-proc-sys-kernel\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.101666 kubelet[1940]: I1002 20:40:09.101264 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1b86df9e-d158-4170-8a82-2c244273c7d3-kube-proxy\") pod \"kube-proxy-m9knr\" (UID: \"1b86df9e-d158-4170-8a82-2c244273c7d3\") " pod="kube-system/kube-proxy-m9knr" Oct 2 20:40:09.101666 kubelet[1940]: I1002 20:40:09.101284 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b86df9e-d158-4170-8a82-2c244273c7d3-xtables-lock\") pod \"kube-proxy-m9knr\" (UID: \"1b86df9e-d158-4170-8a82-2c244273c7d3\") " pod="kube-system/kube-proxy-m9knr" Oct 2 20:40:09.101666 kubelet[1940]: I1002 20:40:09.101305 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m25nt\" (UniqueName: \"kubernetes.io/projected/1b86df9e-d158-4170-8a82-2c244273c7d3-kube-api-access-m25nt\") pod \"kube-proxy-m9knr\" (UID: \"1b86df9e-d158-4170-8a82-2c244273c7d3\") " pod="kube-system/kube-proxy-m9knr" Oct 2 20:40:09.101666 kubelet[1940]: I1002 20:40:09.101325 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-lib-modules\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102092 kubelet[1940]: I1002 20:40:09.101344 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-hostproc\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102092 kubelet[1940]: I1002 20:40:09.101362 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cni-path\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102092 kubelet[1940]: I1002 20:40:09.101384 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-etc-cni-netd\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102092 kubelet[1940]: I1002 20:40:09.101402 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-config-path\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102092 kubelet[1940]: I1002 20:40:09.101419 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-bpf-maps\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102092 kubelet[1940]: I1002 20:40:09.101439 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14517ab4-0630-4cc4-b4f6-d38d30945409-hubble-tls\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102222 kubelet[1940]: I1002 20:40:09.101459 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-cgroup\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102222 kubelet[1940]: I1002 20:40:09.101480 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14517ab4-0630-4cc4-b4f6-d38d30945409-clustermesh-secrets\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102222 kubelet[1940]: I1002 20:40:09.101499 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-host-proc-sys-net\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102222 kubelet[1940]: I1002 20:40:09.101518 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5zs\" (UniqueName: \"kubernetes.io/projected/14517ab4-0630-4cc4-b4f6-d38d30945409-kube-api-access-nm5zs\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102222 kubelet[1940]: I1002 20:40:09.101540 1940 
reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b86df9e-d158-4170-8a82-2c244273c7d3-lib-modules\") pod \"kube-proxy-m9knr\" (UID: \"1b86df9e-d158-4170-8a82-2c244273c7d3\") " pod="kube-system/kube-proxy-m9knr" Oct 2 20:40:09.102222 kubelet[1940]: I1002 20:40:09.101559 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-run\") pod \"cilium-k4xvv\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " pod="kube-system/cilium-k4xvv" Oct 2 20:40:09.102351 kubelet[1940]: I1002 20:40:09.101566 1940 reconciler.go:169] "Reconciler: start to sync state" Oct 2 20:40:09.512462 env[1383]: time="2023-10-02T20:40:09.512420534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k4xvv,Uid:14517ab4-0630-4cc4-b4f6-d38d30945409,Namespace:kube-system,Attempt:0,}" Oct 2 20:40:09.518293 env[1383]: time="2023-10-02T20:40:09.518255397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9knr,Uid:1b86df9e-d158-4170-8a82-2c244273c7d3,Namespace:kube-system,Attempt:0,}" Oct 2 20:40:09.900487 kubelet[1940]: E1002 20:40:09.900458 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:10.452936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436452077.mount: Deactivated successfully. Oct 2 20:40:10.479641 env[1383]: time="2023-10-02T20:40:10.479602826Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:10.484867 env[1383]: time="2023-10-02T20:40:10.484843992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:10.498954 env[1383]: time="2023-10-02T20:40:10.498916793Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:10.504943 env[1383]: time="2023-10-02T20:40:10.504909776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:10.508928 env[1383]: time="2023-10-02T20:40:10.508904631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:10.513393 env[1383]: time="2023-10-02T20:40:10.513359103Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:10.515631 env[1383]: time="2023-10-02T20:40:10.515607357Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:10.519371 env[1383]: time="2023-10-02T20:40:10.519338168Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:10.580186 env[1383]: time="2023-10-02T20:40:10.580120200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:40:10.580362 env[1383]: time="2023-10-02T20:40:10.580331851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:40:10.580469 env[1383]: time="2023-10-02T20:40:10.580441716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:40:10.580702 env[1383]: time="2023-10-02T20:40:10.580648848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:40:10.580702 env[1383]: time="2023-10-02T20:40:10.580680124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:40:10.580784 env[1383]: time="2023-10-02T20:40:10.580703601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:40:10.580872 env[1383]: time="2023-10-02T20:40:10.580833543Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/076cfa6affe365cc26a94cfc9f08d909522187e71137d44dcdd6c9272bfa1c62 pid=2034 runtime=io.containerd.runc.v2 Oct 2 20:40:10.581103 env[1383]: time="2023-10-02T20:40:10.581055353Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be pid=2032 runtime=io.containerd.runc.v2 Oct 2 20:40:10.598586 systemd[1]: Started cri-containerd-076cfa6affe365cc26a94cfc9f08d909522187e71137d44dcdd6c9272bfa1c62.scope. Oct 2 20:40:10.610174 systemd[1]: Started cri-containerd-2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be.scope. 
Oct 2 20:40:10.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.654535 kernel: audit: type=1400 audit(1696279210.620:584): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.654626 kernel: audit: type=1400 audit(1696279210.620:585): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.672434 kernel: audit: type=1400 audit(1696279210.620:586): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.672542 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 20:40:10.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.693106 kernel: audit: type=1400 audit(1696279210.620:587): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.620000 audit: BPF prog-id=68 op=LOAD Oct 2 20:40:10.636000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.636000 audit[2051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400011db38 a2=10 a3=0 items=0 ppid=2034 pid=2051 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:10.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366366613661666665333635636332366139346366633966303864 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400011d5a0 a2=3c a3=0 items=0 ppid=2034 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:10.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366366613661666665333635636332366139346366633966303864 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit: BPF prog-id=69 op=LOAD Oct 2 20:40:10.637000 audit[2051]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=16 a0=5 a1=400011d8e0 a2=78 a3=0 items=0 ppid=2034 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:10.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366366613661666665333635636332366139346366633966303864 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit: BPF prog-id=70 op=LOAD Oct 2 20:40:10.637000 audit[2051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400011d670 a2=78 a3=0 items=0 ppid=2034 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:10.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366366613661666665333635636332366139346366633966303864 Oct 2 20:40:10.637000 audit: BPF prog-id=70 op=UNLOAD Oct 2 20:40:10.637000 audit: BPF prog-id=69 op=UNLOAD Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { 
bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { perfmon } for pid=2051 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit[2051]: AVC avc: denied { bpf } for pid=2051 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.637000 audit: BPF prog-id=71 op=LOAD Oct 2 20:40:10.637000 audit[2051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400011db40 a2=78 a3=0 items=0 ppid=2034 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:10.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037366366613661666665333635636332366139346366633966303864 Oct 2 20:40:10.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.654000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:40:10.654000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.676000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.695000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.695000 audit: BPF prog-id=72 op=LOAD Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=2032 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:10.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643563616536353365396438653036333536333339616435316234 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=2032 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:10.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643563616536353365396438653036333536333339616435316234 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit: BPF prog-id=73 op=LOAD Oct 2 20:40:10.696000 audit[2054]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=2032 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:10.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643563616536353365396438653036333536333339616435316234 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.696000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 20:40:10.696000 audit: BPF prog-id=74 op=LOAD Oct 2 20:40:10.696000 audit[2054]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=2032 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:10.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643563616536353365396438653036333536333339616435316234 Oct 2 20:40:10.696000 audit: BPF prog-id=74 op=UNLOAD Oct 2 20:40:10.697000 audit: BPF prog-id=73 op=UNLOAD Oct 2 20:40:10.697000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.697000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.697000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.697000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.697000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.697000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.697000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.697000 audit[2054]: AVC avc: denied { perfmon } for pid=2054 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.697000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.697000 audit[2054]: AVC avc: denied { bpf } for pid=2054 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:10.697000 audit: BPF prog-id=75 op=LOAD Oct 2 20:40:10.697000 audit[2054]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=2032 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:10.697000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643563616536353365396438653036333536333339616435316234 Oct 2 20:40:10.713022 env[1383]: time="2023-10-02T20:40:10.711108660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m9knr,Uid:1b86df9e-d158-4170-8a82-2c244273c7d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"076cfa6affe365cc26a94cfc9f08d909522187e71137d44dcdd6c9272bfa1c62\"" Oct 2 20:40:10.715325 env[1383]: time="2023-10-02T20:40:10.715285530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k4xvv,Uid:14517ab4-0630-4cc4-b4f6-d38d30945409,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\"" Oct 2 20:40:10.715564 env[1383]: time="2023-10-02T20:40:10.715363359Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 20:40:10.901235 kubelet[1940]: E1002 20:40:10.901188 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:11.736573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3272379454.mount: Deactivated successfully. Oct 2 20:40:11.901451 kubelet[1940]: E1002 20:40:11.901410 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:12.134825 env[1383]: time="2023-10-02T20:40:12.134696398Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:12.155283 env[1383]: time="2023-10-02T20:40:12.155227502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:12.165784 env[1383]: time="2023-10-02T20:40:12.165759899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:12.173902 env[1383]: time="2023-10-02T20:40:12.173878169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:12.174319 env[1383]: time="2023-10-02T20:40:12.174295515Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e\"" Oct 2 20:40:12.175361 env[1383]: time="2023-10-02T20:40:12.175333740Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 20:40:12.176249 env[1383]: time="2023-10-02T20:40:12.176216666Z" level=info msg="CreateContainer within sandbox \"076cfa6affe365cc26a94cfc9f08d909522187e71137d44dcdd6c9272bfa1c62\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 20:40:12.339089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount27765841.mount: Deactivated successfully. 
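The containerd env[1383] entries above report, logfmt-style, the pod sandbox IDs returned by RunPodSandbox and the image digest returned by PullImage. A minimal, hypothetical Python sketch for pulling those values out of lines like these; the patterns and the helper name are ours and assume exactly the escaped quoting shown in this log:

import re

# Hypothetical helpers for this log format only: containerd wraps the values
# of interest in escaped quotes, e.g.  returns sandbox id \"076cfa6a...\"
SANDBOX_ID = re.compile(r'returns sandbox id \\"([0-9a-f]{64})\\"')
IMAGE_REF = re.compile(r'returns image reference \\"(sha256:[0-9a-f]{64})\\"')

def extract(line: str):
    """Return ('sandbox', id) or ('image', digest) if the line reports one."""
    for kind, pattern in (("sandbox", SANDBOX_ID), ("image", IMAGE_REF)):
        match = pattern.search(line)
        if match:
            return kind, match.group(1)
    return None

# Example with the kube-proxy sandbox line from above:
line = ('env[1383]: time="2023-10-02T20:40:10.711108660Z" level=info '
        'msg="RunPodSandbox for ... returns sandbox id '
        '\\"076cfa6affe365cc26a94cfc9f08d909522187e71137d44dcdd6c9272bfa1c62\\""')
print(extract(line))  # -> ('sandbox', '076cfa6a...1c62')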
Oct 2 20:40:12.362043 env[1383]: time="2023-10-02T20:40:12.361919842Z" level=info msg="CreateContainer within sandbox \"076cfa6affe365cc26a94cfc9f08d909522187e71137d44dcdd6c9272bfa1c62\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"301029fbd177a17158f0257aec25b25c07c910a264042863de8d323bba286a0a\"" Oct 2 20:40:12.362719 env[1383]: time="2023-10-02T20:40:12.362684023Z" level=info msg="StartContainer for \"301029fbd177a17158f0257aec25b25c07c910a264042863de8d323bba286a0a\"" Oct 2 20:40:12.381304 systemd[1]: Started cri-containerd-301029fbd177a17158f0257aec25b25c07c910a264042863de8d323bba286a0a.scope. Oct 2 20:40:12.399000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.405283 kernel: kauditd_printk_skb: 113 callbacks suppressed Oct 2 20:40:12.405356 kernel: audit: type=1400 audit(1696279212.399:618): avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.399000 audit[2109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2034 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.451439 kernel: audit: type=1300 audit(1696279212.399:618): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2034 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.451533 kernel: audit: type=1327 audit(1696279212.399:618): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313032396662643137376131373135386630323537616563323562 Oct 2 20:40:12.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313032396662643137376131373135386630323537616563323562 Oct 2 20:40:12.450804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1387248188.mount: Deactivated successfully. 
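The audit PROCTITLE fields in these records are hex-encoded command lines with NUL-separated arguments. A minimal decoding sketch in Python follows (the function name is ours); for instance, the runc PROCTITLE repeated above decodes to roughly "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd...", and the iptables/ip6tables PROCTITLEs later in the log decode to the individual kube-proxy chain and rule commands:

def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE field: hex-encoded argv joined by NUL bytes."""
    return bytes.fromhex(hex_value).decode("utf-8", "replace").replace("\x00", " ")

# The first iptables PROCTITLE further below (creating the KUBE-PROXY-CANARY
# chain in the mangle table) decodes to:
#   iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
))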
Oct 2 20:40:12.399000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.492674 kernel: audit: type=1400 audit(1696279212.399:619): avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.399000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.510215 kernel: audit: type=1400 audit(1696279212.399:619): avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.399000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.526522 kernel: audit: type=1400 audit(1696279212.399:619): avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.399000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.543036 kernel: audit: type=1400 audit(1696279212.399:619): avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.399000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.560328 kernel: audit: type=1400 audit(1696279212.399:619): avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.560431 kernel: audit: type=1400 audit(1696279212.399:619): avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.399000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.399000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.399000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.399000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.399000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:40:12.399000 audit: BPF prog-id=76 op=LOAD Oct 2 20:40:12.399000 audit[2109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2034 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313032396662643137376131373135386630323537616563323562 Oct 2 20:40:12.404000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.404000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.404000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.404000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.404000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.404000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.404000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.404000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.404000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.404000 audit: BPF prog-id=77 op=LOAD Oct 2 20:40:12.404000 audit[2109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2034 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313032396662643137376131373135386630323537616563323562 Oct 2 20:40:12.422000 audit: BPF prog-id=77 op=UNLOAD Oct 2 20:40:12.422000 audit: BPF prog-id=76 op=UNLOAD Oct 2 20:40:12.422000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.422000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.422000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.422000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.422000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.422000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.422000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.422000 audit[2109]: AVC avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.422000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.422000 audit[2109]: AVC avc: denied { bpf } for pid=2109 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.596010 kernel: audit: type=1400 audit(1696279212.399:619): avc: denied { perfmon } for pid=2109 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:40:12.422000 audit: BPF prog-id=78 op=LOAD Oct 2 20:40:12.422000 audit[2109]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2034 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.422000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330313032396662643137376131373135386630323537616563323562 Oct 2 20:40:12.598075 env[1383]: time="2023-10-02T20:40:12.598026217Z" level=info msg="StartContainer for \"301029fbd177a17158f0257aec25b25c07c910a264042863de8d323bba286a0a\" returns successfully" Oct 2 20:40:12.709072 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 20:40:12.709228 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 20:40:12.709258 kernel: IPVS: ipvs loaded. Oct 2 20:40:12.724004 kernel: IPVS: [rr] scheduler registered. Oct 2 20:40:12.771016 kernel: IPVS: [wrr] scheduler registered. 
Oct 2 20:40:12.781013 kernel: IPVS: [sh] scheduler registered. Oct 2 20:40:12.864000 audit[2170]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.864000 audit[2170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff192ac10 a2=0 a3=ffffb09de6c0 items=0 ppid=2121 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.864000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:40:12.866000 audit[2173]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.866000 audit[2173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4062180 a2=0 a3=ffff94aa46c0 items=0 ppid=2121 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.866000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:40:12.868000 audit[2172]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:12.868000 audit[2172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe6c430e0 a2=0 a3=ffff95edc6c0 items=0 ppid=2121 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:40:12.870000 audit[2174]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.870000 audit[2174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9d21a40 a2=0 a3=ffffb7ad16c0 items=0 ppid=2121 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:40:12.872000 audit[2175]: NETFILTER_CFG table=nat:43 family=10 entries=1 op=nft_register_chain pid=2175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:12.872000 audit[2175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe51a5480 a2=0 a3=ffff8a6d86c0 items=0 ppid=2121 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:40:12.874000 audit[2176]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain 
pid=2176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:12.874000 audit[2176]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeec6d1e0 a2=0 a3=ffff9d90a6c0 items=0 ppid=2121 pid=2176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.874000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:40:12.888692 kubelet[1940]: E1002 20:40:12.888654 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:12.902003 kubelet[1940]: E1002 20:40:12.901977 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:12.965802 kubelet[1940]: E1002 20:40:12.965769 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:12.969000 audit[2177]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.969000 audit[2177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdde96580 a2=0 a3=ffffa1efc6c0 items=0 ppid=2121 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.969000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:40:12.974000 audit[2179]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.974000 audit[2179]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffefa2dac0 a2=0 a3=ffffa26f66c0 items=0 ppid=2121 pid=2179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.974000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 20:40:12.978000 audit[2182]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=2182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.978000 audit[2182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff15fda80 a2=0 a3=ffffa218b6c0 items=0 ppid=2121 pid=2182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.978000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 20:40:12.979000 audit[2183]: 
NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_chain pid=2183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.979000 audit[2183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc52cca0 a2=0 a3=ffffadd3a6c0 items=0 ppid=2121 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.979000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:40:12.983000 audit[2185]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=2185 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.983000 audit[2185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe311b850 a2=0 a3=ffffaf6636c0 items=0 ppid=2121 pid=2185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.983000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:40:12.984000 audit[2186]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.984000 audit[2186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffceeed780 a2=0 a3=ffffbc6076c0 items=0 ppid=2121 pid=2186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.984000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:40:12.988000 audit[2188]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.988000 audit[2188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff013a550 a2=0 a3=ffff95d456c0 items=0 ppid=2121 pid=2188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.988000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:40:12.993000 audit[2191]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=2191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.993000 audit[2191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc5e5afd0 a2=0 a3=ffff9f12b6c0 items=0 ppid=2121 pid=2191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.993000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 20:40:12.994000 audit[2192]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.994000 audit[2192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3d96df0 a2=0 a3=ffff8f1556c0 items=0 ppid=2121 pid=2192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.994000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:40:12.997000 audit[2194]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.997000 audit[2194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffdc09e80 a2=0 a3=ffff91b736c0 items=0 ppid=2121 pid=2194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.997000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:40:12.999000 audit[2195]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:12.999000 audit[2195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc1ace90 a2=0 a3=ffff8e4a56c0 items=0 ppid=2121 pid=2195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:12.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:40:13.002000 audit[2197]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:13.002000 audit[2197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcd33f310 a2=0 a3=ffffa59dc6c0 items=0 ppid=2121 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:40:13.007000 audit[2200]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:13.007000 audit[2200]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe9975850 a2=0 a3=ffff829f76c0 items=0 ppid=2121 pid=2200 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.007000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:40:13.012000 audit[2203]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2203 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:13.012000 audit[2203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff764ee20 a2=0 a3=ffff91e966c0 items=0 ppid=2121 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.012000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:40:13.013000 audit[2204]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=2204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:13.013000 audit[2204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffce6ee630 a2=0 a3=ffffb54e06c0 items=0 ppid=2121 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:40:13.017000 audit[2206]: NETFILTER_CFG table=nat:60 family=2 entries=2 op=nft_register_chain pid=2206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:13.017000 audit[2206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd8f43a90 a2=0 a3=ffff9bba86c0 items=0 ppid=2121 pid=2206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.017000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:40:13.022000 audit[2209]: NETFILTER_CFG table=nat:61 family=2 entries=2 op=nft_register_chain pid=2209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:40:13.022000 audit[2209]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff29f5a80 a2=0 a3=ffffa1c8c6c0 items=0 ppid=2121 pid=2209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.022000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 
20:40:13.048000 audit[2213]: NETFILTER_CFG table=filter:62 family=2 entries=6 op=nft_register_rule pid=2213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:40:13.048000 audit[2213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffec521e10 a2=0 a3=ffff95bd16c0 items=0 ppid=2121 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.048000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:40:13.065000 audit[2213]: NETFILTER_CFG table=nat:63 family=2 entries=17 op=nft_register_chain pid=2213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:40:13.065000 audit[2213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffec521e10 a2=0 a3=ffff95bd16c0 items=0 ppid=2121 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.065000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:40:13.069000 audit[2217]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=2217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.069000 audit[2217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdd324af0 a2=0 a3=ffffb6eb66c0 items=0 ppid=2121 pid=2217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.069000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:40:13.074000 audit[2219]: NETFILTER_CFG table=filter:65 family=10 entries=2 op=nft_register_chain pid=2219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.074000 audit[2219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffeb0c9240 a2=0 a3=ffff970d26c0 items=0 ppid=2121 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.074000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 20:40:13.079000 audit[2222]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2222 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.079000 audit[2222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc58052c0 a2=0 a3=ffffb10236c0 items=0 ppid=2121 pid=2222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.079000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 20:40:13.081000 audit[2223]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=2223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.081000 audit[2223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd52c5ad0 a2=0 a3=ffff958916c0 items=0 ppid=2121 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.081000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:40:13.085000 audit[2225]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_rule pid=2225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.085000 audit[2225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc704e590 a2=0 a3=ffffac1816c0 items=0 ppid=2121 pid=2225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.085000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:40:13.086000 audit[2226]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_chain pid=2226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.086000 audit[2226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7fbb6e0 a2=0 a3=ffff965496c0 items=0 ppid=2121 pid=2226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.086000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:40:13.090000 audit[2228]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_rule pid=2228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.090000 audit[2228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe7e7d460 a2=0 a3=ffff7fe276c0 items=0 ppid=2121 pid=2228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 20:40:13.096000 audit[2231]: NETFILTER_CFG table=filter:71 family=10 entries=2 op=nft_register_chain pid=2231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.096000 audit[2231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe171ed50 a2=0 a3=ffff849436c0 
items=0 ppid=2121 pid=2231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:40:13.099000 audit[2232]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=2232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.099000 audit[2232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4c3e700 a2=0 a3=ffffaced86c0 items=0 ppid=2121 pid=2232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:40:13.103000 audit[2234]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=2234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.103000 audit[2234]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffa1c9890 a2=0 a3=ffffb780a6c0 items=0 ppid=2121 pid=2234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.103000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:40:13.104000 audit[2235]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_chain pid=2235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.104000 audit[2235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffebe001e0 a2=0 a3=ffff9c40a6c0 items=0 ppid=2121 pid=2235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:40:13.108000 audit[2237]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_rule pid=2237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.108000 audit[2237]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc4559000 a2=0 a3=ffffb51806c0 items=0 ppid=2121 pid=2237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.108000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:40:13.113000 audit[2240]: NETFILTER_CFG 
table=filter:76 family=10 entries=1 op=nft_register_rule pid=2240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.113000 audit[2240]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd1ebf1a0 a2=0 a3=ffffb0d816c0 items=0 ppid=2121 pid=2240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.113000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:40:13.118000 audit[2243]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.118000 audit[2243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff2ef3d00 a2=0 a3=ffffa7c386c0 items=0 ppid=2121 pid=2243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.118000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 20:40:13.120000 audit[2244]: NETFILTER_CFG table=nat:78 family=10 entries=1 op=nft_register_chain pid=2244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.120000 audit[2244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdbd69d70 a2=0 a3=ffffad0896c0 items=0 ppid=2121 pid=2244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.120000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:40:13.123000 audit[2246]: NETFILTER_CFG table=nat:79 family=10 entries=2 op=nft_register_chain pid=2246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.123000 audit[2246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe84df220 a2=0 a3=ffff9639b6c0 items=0 ppid=2121 pid=2246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:40:13.129000 audit[2249]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:40:13.129000 audit[2249]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffdee2a230 a2=0 a3=ffffa741a6c0 items=0 ppid=2121 pid=2249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.129000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:40:13.137000 audit[2253]: NETFILTER_CFG table=filter:81 family=10 entries=3 op=nft_register_rule pid=2253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:40:13.137000 audit[2253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe7ce8320 a2=0 a3=ffffb77a66c0 items=0 ppid=2121 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.137000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:40:13.138000 audit[2253]: NETFILTER_CFG table=nat:82 family=10 entries=10 op=nft_register_chain pid=2253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:40:13.138000 audit[2253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1860 a0=3 a1=ffffe7ce8320 a2=0 a3=ffffb77a66c0 items=0 ppid=2121 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:40:13.138000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:40:13.903026 kubelet[1940]: E1002 20:40:13.902975 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:14.903594 kubelet[1940]: E1002 20:40:14.903543 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:15.904477 kubelet[1940]: E1002 20:40:15.904428 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:16.905240 kubelet[1940]: E1002 20:40:16.905183 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:17.092287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199204170.mount: Deactivated successfully. 
Oct 2 20:40:17.905735 kubelet[1940]: E1002 20:40:17.905693 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:17.966513 kubelet[1940]: E1002 20:40:17.966483 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:18.906171 kubelet[1940]: E1002 20:40:18.906129 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:19.218244 env[1383]: time="2023-10-02T20:40:19.218117012Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:19.225302 env[1383]: time="2023-10-02T20:40:19.225265639Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:19.231373 env[1383]: time="2023-10-02T20:40:19.231345222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:40:19.231816 env[1383]: time="2023-10-02T20:40:19.231785174Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db\"" Oct 2 20:40:19.233821 env[1383]: time="2023-10-02T20:40:19.233794517Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:40:19.264850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537945605.mount: Deactivated successfully. Oct 2 20:40:19.268656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount524655181.mount: Deactivated successfully. Oct 2 20:40:19.297169 env[1383]: time="2023-10-02T20:40:19.297095152Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59\"" Oct 2 20:40:19.297883 env[1383]: time="2023-10-02T20:40:19.297845751Z" level=info msg="StartContainer for \"bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59\"" Oct 2 20:40:19.318177 systemd[1]: Started cri-containerd-bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59.scope. Oct 2 20:40:19.332372 systemd[1]: cri-containerd-bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59.scope: Deactivated successfully. Oct 2 20:40:19.906802 kubelet[1940]: E1002 20:40:19.906757 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:20.263233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59-rootfs.mount: Deactivated successfully. 
Oct 2 20:40:20.907312 kubelet[1940]: E1002 20:40:20.907270 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:21.324749 env[1383]: time="2023-10-02T20:40:21.324487345Z" level=error msg="get state for bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59" error="context deadline exceeded: unknown" Oct 2 20:40:21.325125 env[1383]: time="2023-10-02T20:40:21.325092003Z" level=warning msg="unknown status" status=0 Oct 2 20:40:21.332340 env[1383]: time="2023-10-02T20:40:21.332267385Z" level=info msg="shim disconnected" id=bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59 Oct 2 20:40:21.332340 env[1383]: time="2023-10-02T20:40:21.332336538Z" level=warning msg="cleaning up after shim disconnected" id=bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59 namespace=k8s.io Oct 2 20:40:21.332473 env[1383]: time="2023-10-02T20:40:21.332345417Z" level=info msg="cleaning up dead shim" Oct 2 20:40:21.344670 env[1383]: time="2023-10-02T20:40:21.344628593Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:40:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2279 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:40:21Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:40:21.344963 env[1383]: time="2023-10-02T20:40:21.344874447Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 20:40:21.346073 env[1383]: time="2023-10-02T20:40:21.346037328Z" level=error msg="Failed to pipe stdout of container \"bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59\"" error="reading from a closed fifo" Oct 2 20:40:21.346203 env[1383]: time="2023-10-02T20:40:21.346159795Z" level=error msg="Failed to pipe stderr of container \"bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59\"" error="reading from a closed fifo" Oct 2 20:40:21.354671 env[1383]: time="2023-10-02T20:40:21.354629964Z" level=error msg="StartContainer for \"bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:40:21.355297 kubelet[1940]: E1002 20:40:21.354978 1940 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59" Oct 2 20:40:21.355297 kubelet[1940]: E1002 20:40:21.355121 1940 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:40:21.355297 kubelet[1940]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:40:21.355297 kubelet[1940]: rm /hostbin/cilium-mount Oct 2 
20:40:21.355466 kubelet[1940]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nm5zs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:40:21.355581 kubelet[1940]: E1002 20:40:21.355172 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:40:21.907677 kubelet[1940]: E1002 20:40:21.907641 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:22.143059 env[1383]: time="2023-10-02T20:40:22.143021295Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:40:22.177482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount533778023.mount: Deactivated successfully. Oct 2 20:40:22.194066 env[1383]: time="2023-10-02T20:40:22.193978700Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde\"" Oct 2 20:40:22.194863 env[1383]: time="2023-10-02T20:40:22.194837693Z" level=info msg="StartContainer for \"fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde\"" Oct 2 20:40:22.215513 systemd[1]: Started cri-containerd-fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde.scope. Oct 2 20:40:22.231141 systemd[1]: cri-containerd-fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde.scope: Deactivated successfully. 
Oct 2 20:40:22.251829 env[1383]: time="2023-10-02T20:40:22.251773538Z" level=info msg="shim disconnected" id=fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde Oct 2 20:40:22.251829 env[1383]: time="2023-10-02T20:40:22.251826972Z" level=warning msg="cleaning up after shim disconnected" id=fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde namespace=k8s.io Oct 2 20:40:22.252031 env[1383]: time="2023-10-02T20:40:22.251838811Z" level=info msg="cleaning up dead shim" Oct 2 20:40:22.264010 env[1383]: time="2023-10-02T20:40:22.263945236Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:40:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2317 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:40:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:40:22.264277 env[1383]: time="2023-10-02T20:40:22.264212009Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 20:40:22.268079 env[1383]: time="2023-10-02T20:40:22.268034745Z" level=error msg="Failed to pipe stderr of container \"fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde\"" error="reading from a closed fifo" Oct 2 20:40:22.268165 env[1383]: time="2023-10-02T20:40:22.268048344Z" level=error msg="Failed to pipe stdout of container \"fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde\"" error="reading from a closed fifo" Oct 2 20:40:22.273169 env[1383]: time="2023-10-02T20:40:22.273121595Z" level=error msg="StartContainer for \"fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:40:22.273355 kubelet[1940]: E1002 20:40:22.273332 1940 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde" Oct 2 20:40:22.273771 kubelet[1940]: E1002 20:40:22.273441 1940 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:40:22.273771 kubelet[1940]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:40:22.273771 kubelet[1940]: rm /hostbin/cilium-mount Oct 2 20:40:22.273771 kubelet[1940]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nm5zs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:40:22.274032 kubelet[1940]: E1002 20:40:22.273480 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:40:22.908677 kubelet[1940]: E1002 20:40:22.908646 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:22.966963 kubelet[1940]: E1002 20:40:22.966928 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:23.143481 kubelet[1940]: I1002 20:40:23.143459 1940 scope.go:115] "RemoveContainer" containerID="bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59" Oct 2 20:40:23.143953 kubelet[1940]: I1002 20:40:23.143938 1940 scope.go:115] "RemoveContainer" containerID="bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59" Oct 2 20:40:23.145258 env[1383]: time="2023-10-02T20:40:23.145206581Z" level=info msg="RemoveContainer for \"bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59\"" Oct 2 20:40:23.145636 env[1383]: time="2023-10-02T20:40:23.145606662Z" level=info msg="RemoveContainer for \"bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59\"" Oct 2 20:40:23.145720 env[1383]: time="2023-10-02T20:40:23.145689453Z" level=error msg="RemoveContainer for \"bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59\" failed" error="failed to set removing state for container \"bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59\": 
container is already in removing state" Oct 2 20:40:23.145842 kubelet[1940]: E1002 20:40:23.145830 1940 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59\": container is already in removing state" containerID="bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59" Oct 2 20:40:23.145953 kubelet[1940]: E1002 20:40:23.145941 1940 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59": container is already in removing state; Skipping pod "cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)" Oct 2 20:40:23.146444 kubelet[1940]: E1002 20:40:23.146428 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:40:23.155924 env[1383]: time="2023-10-02T20:40:23.155884455Z" level=info msg="RemoveContainer for \"bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59\" returns successfully" Oct 2 20:40:23.172551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde-rootfs.mount: Deactivated successfully. Oct 2 20:40:23.908835 kubelet[1940]: E1002 20:40:23.908784 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:24.145860 kubelet[1940]: E1002 20:40:24.145835 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:40:24.431951 kubelet[1940]: W1002 20:40:24.431896 1940 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14517ab4_0630_4cc4_b4f6_d38d30945409.slice/cri-containerd-bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59.scope WatchSource:0}: container "bb43586c02dcd98bdae17ea993cbeb2e09adb15f7259a38d9ddbce864e2c9d59" in namespace "k8s.io": not found Oct 2 20:40:24.909526 kubelet[1940]: E1002 20:40:24.909473 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:25.910787 kubelet[1940]: E1002 20:40:25.910739 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:26.911328 kubelet[1940]: E1002 20:40:26.911289 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:27.538732 kubelet[1940]: W1002 20:40:27.538690 1940 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14517ab4_0630_4cc4_b4f6_d38d30945409.slice/cri-containerd-fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde.scope WatchSource:0}: task 
fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde not found: not found Oct 2 20:40:27.911427 kubelet[1940]: E1002 20:40:27.911394 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:27.968216 kubelet[1940]: E1002 20:40:27.968199 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:28.912908 kubelet[1940]: E1002 20:40:28.912874 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:29.913728 kubelet[1940]: E1002 20:40:29.913699 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:30.914406 kubelet[1940]: E1002 20:40:30.914359 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:31.914907 kubelet[1940]: E1002 20:40:31.914878 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:32.887885 kubelet[1940]: E1002 20:40:32.887850 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:32.916292 kubelet[1940]: E1002 20:40:32.916265 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:32.969075 kubelet[1940]: E1002 20:40:32.969060 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:33.916683 kubelet[1940]: E1002 20:40:33.916649 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:34.917530 kubelet[1940]: E1002 20:40:34.917495 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:35.917822 kubelet[1940]: E1002 20:40:35.917793 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:36.918263 kubelet[1940]: E1002 20:40:36.918228 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:37.101316 env[1383]: time="2023-10-02T20:40:37.100169533Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:40:37.123425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4142364569.mount: Deactivated successfully. Oct 2 20:40:37.128698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491723824.mount: Deactivated successfully. 
Oct 2 20:40:37.145940 env[1383]: time="2023-10-02T20:40:37.145854249Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab\"" Oct 2 20:40:37.146266 env[1383]: time="2023-10-02T20:40:37.146239102Z" level=info msg="StartContainer for \"6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab\"" Oct 2 20:40:37.167557 systemd[1]: Started cri-containerd-6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab.scope. Oct 2 20:40:37.180750 systemd[1]: cri-containerd-6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab.scope: Deactivated successfully. Oct 2 20:40:37.256686 env[1383]: time="2023-10-02T20:40:37.256639023Z" level=info msg="shim disconnected" id=6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab Oct 2 20:40:37.256924 env[1383]: time="2023-10-02T20:40:37.256906284Z" level=warning msg="cleaning up after shim disconnected" id=6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab namespace=k8s.io Oct 2 20:40:37.257016 env[1383]: time="2023-10-02T20:40:37.256980279Z" level=info msg="cleaning up dead shim" Oct 2 20:40:37.268917 env[1383]: time="2023-10-02T20:40:37.268879274Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:40:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2356 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:40:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:40:37.269318 env[1383]: time="2023-10-02T20:40:37.269267526Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 20:40:37.269746 env[1383]: time="2023-10-02T20:40:37.269543746Z" level=error msg="Failed to pipe stdout of container \"6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab\"" error="reading from a closed fifo" Oct 2 20:40:37.269864 env[1383]: time="2023-10-02T20:40:37.269544506Z" level=error msg="Failed to pipe stderr of container \"6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab\"" error="reading from a closed fifo" Oct 2 20:40:37.314374 env[1383]: time="2023-10-02T20:40:37.314326727Z" level=error msg="StartContainer for \"6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:40:37.315017 kubelet[1940]: E1002 20:40:37.314636 1940 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab" Oct 2 20:40:37.315017 kubelet[1940]: E1002 20:40:37.314736 1940 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:40:37.315017 kubelet[1940]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:40:37.315017 kubelet[1940]: rm /hostbin/cilium-mount Oct 2 20:40:37.315223 kubelet[1940]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nm5zs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:40:37.315280 kubelet[1940]: E1002 20:40:37.314769 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:40:37.919272 kubelet[1940]: E1002 20:40:37.919242 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:37.970602 kubelet[1940]: E1002 20:40:37.970579 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:38.121737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab-rootfs.mount: Deactivated successfully. 
Oct 2 20:40:38.169848 kubelet[1940]: I1002 20:40:38.169497 1940 scope.go:115] "RemoveContainer" containerID="fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde" Oct 2 20:40:38.169848 kubelet[1940]: I1002 20:40:38.169815 1940 scope.go:115] "RemoveContainer" containerID="fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde" Oct 2 20:40:38.171126 env[1383]: time="2023-10-02T20:40:38.171091011Z" level=info msg="RemoveContainer for \"fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde\"" Oct 2 20:40:38.171885 env[1383]: time="2023-10-02T20:40:38.171863437Z" level=info msg="RemoveContainer for \"fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde\"" Oct 2 20:40:38.172205 env[1383]: time="2023-10-02T20:40:38.172059063Z" level=error msg="RemoveContainer for \"fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde\" failed" error="failed to set removing state for container \"fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde\": container is already in removing state" Oct 2 20:40:38.172946 kubelet[1940]: E1002 20:40:38.172446 1940 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde\": container is already in removing state" containerID="fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde" Oct 2 20:40:38.172946 kubelet[1940]: E1002 20:40:38.172472 1940 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde": container is already in removing state; Skipping pod "cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)" Oct 2 20:40:38.172946 kubelet[1940]: E1002 20:40:38.172694 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:40:38.184231 env[1383]: time="2023-10-02T20:40:38.184197460Z" level=info msg="RemoveContainer for \"fcbf12c9dc68aa8c9b63673172ec14e4c2c0ddac8decf211ccebfd9c12ef0dde\" returns successfully" Oct 2 20:40:38.919530 kubelet[1940]: E1002 20:40:38.919490 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:39.920483 kubelet[1940]: E1002 20:40:39.920444 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:40.361344 kubelet[1940]: W1002 20:40:40.361044 1940 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14517ab4_0630_4cc4_b4f6_d38d30945409.slice/cri-containerd-6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab.scope WatchSource:0}: task 6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab not found: not found Oct 2 20:40:40.921295 kubelet[1940]: E1002 20:40:40.921258 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:41.922088 kubelet[1940]: E1002 20:40:41.922054 1940 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:42.923066 kubelet[1940]: E1002 20:40:42.923035 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:42.971149 kubelet[1940]: E1002 20:40:42.971124 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:43.924068 kubelet[1940]: E1002 20:40:43.924040 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:44.925342 kubelet[1940]: E1002 20:40:44.925301 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:45.926109 kubelet[1940]: E1002 20:40:45.926077 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:46.926728 kubelet[1940]: E1002 20:40:46.926690 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:47.927446 kubelet[1940]: E1002 20:40:47.927410 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:47.971965 kubelet[1940]: E1002 20:40:47.971951 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:48.928556 kubelet[1940]: E1002 20:40:48.928526 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:49.929401 kubelet[1940]: E1002 20:40:49.929366 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:50.097745 kubelet[1940]: E1002 20:40:50.097715 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:40:50.929975 kubelet[1940]: E1002 20:40:50.929939 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:51.930115 kubelet[1940]: E1002 20:40:51.930062 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:52.888542 kubelet[1940]: E1002 20:40:52.888500 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:52.931182 kubelet[1940]: E1002 20:40:52.931160 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:52.972582 kubelet[1940]: E1002 20:40:52.972559 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:53.932438 kubelet[1940]: E1002 20:40:53.932409 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:54.933116 
kubelet[1940]: E1002 20:40:54.933079 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:55.933612 kubelet[1940]: E1002 20:40:55.933578 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:56.934267 kubelet[1940]: E1002 20:40:56.934234 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:57.934604 kubelet[1940]: E1002 20:40:57.934578 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:57.973161 kubelet[1940]: E1002 20:40:57.973147 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:40:58.935588 kubelet[1940]: E1002 20:40:58.935552 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:40:59.936428 kubelet[1940]: E1002 20:40:59.936385 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:00.936917 kubelet[1940]: E1002 20:41:00.936890 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:01.100194 env[1383]: time="2023-10-02T20:41:01.100136181Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:41:01.131707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797531592.mount: Deactivated successfully. Oct 2 20:41:01.136117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775046651.mount: Deactivated successfully. Oct 2 20:41:01.148564 env[1383]: time="2023-10-02T20:41:01.148479190Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\"" Oct 2 20:41:01.149195 env[1383]: time="2023-10-02T20:41:01.149157879Z" level=info msg="StartContainer for \"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\"" Oct 2 20:41:01.170273 systemd[1]: Started cri-containerd-1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca.scope. Oct 2 20:41:01.185334 systemd[1]: cri-containerd-1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca.scope: Deactivated successfully. 
Oct 2 20:41:01.220827 env[1383]: time="2023-10-02T20:41:01.220304014Z" level=info msg="shim disconnected" id=1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca Oct 2 20:41:01.220827 env[1383]: time="2023-10-02T20:41:01.220356292Z" level=warning msg="cleaning up after shim disconnected" id=1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca namespace=k8s.io Oct 2 20:41:01.220827 env[1383]: time="2023-10-02T20:41:01.220366532Z" level=info msg="cleaning up dead shim" Oct 2 20:41:01.232929 env[1383]: time="2023-10-02T20:41:01.232878405Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:41:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2398 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:41:01Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:41:01.233236 env[1383]: time="2023-10-02T20:41:01.233179951Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:41:01.233435 env[1383]: time="2023-10-02T20:41:01.233402141Z" level=error msg="Failed to pipe stdout of container \"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\"" error="reading from a closed fifo" Oct 2 20:41:01.235141 env[1383]: time="2023-10-02T20:41:01.235036667Z" level=error msg="Failed to pipe stderr of container \"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\"" error="reading from a closed fifo" Oct 2 20:41:01.239530 env[1383]: time="2023-10-02T20:41:01.239484905Z" level=error msg="StartContainer for \"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:41:01.240144 kubelet[1940]: E1002 20:41:01.239673 1940 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca" Oct 2 20:41:01.240144 kubelet[1940]: E1002 20:41:01.239789 1940 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:41:01.240144 kubelet[1940]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:41:01.240144 kubelet[1940]: rm /hostbin/cilium-mount Oct 2 20:41:01.240334 kubelet[1940]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nm5zs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:41:01.240387 kubelet[1940]: E1002 20:41:01.239822 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:41:01.938397 kubelet[1940]: E1002 20:41:01.938353 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:02.129911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca-rootfs.mount: Deactivated successfully. 
Oct 2 20:41:02.207043 kubelet[1940]: I1002 20:41:02.206427 1940 scope.go:115] "RemoveContainer" containerID="6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab" Oct 2 20:41:02.207043 kubelet[1940]: I1002 20:41:02.206732 1940 scope.go:115] "RemoveContainer" containerID="6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab" Oct 2 20:41:02.208448 env[1383]: time="2023-10-02T20:41:02.208416532Z" level=info msg="RemoveContainer for \"6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab\"" Oct 2 20:41:02.210437 env[1383]: time="2023-10-02T20:41:02.209125380Z" level=info msg="RemoveContainer for \"6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab\"" Oct 2 20:41:02.210437 env[1383]: time="2023-10-02T20:41:02.209265854Z" level=error msg="RemoveContainer for \"6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab\" failed" error="failed to set removing state for container \"6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab\": container is already in removing state" Oct 2 20:41:02.210557 kubelet[1940]: E1002 20:41:02.209386 1940 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab\": container is already in removing state" containerID="6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab" Oct 2 20:41:02.210557 kubelet[1940]: E1002 20:41:02.209421 1940 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab": container is already in removing state; Skipping pod "cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)" Oct 2 20:41:02.210557 kubelet[1940]: E1002 20:41:02.209677 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:41:02.218632 env[1383]: time="2023-10-02T20:41:02.218605997Z" level=info msg="RemoveContainer for \"6cbd17ce67a2b6d57fc2902187bb83e95f9ceaea15889fa8077028670c77e9ab\" returns successfully" Oct 2 20:41:02.938909 kubelet[1940]: E1002 20:41:02.938874 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:02.973716 kubelet[1940]: E1002 20:41:02.973688 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:03.939589 kubelet[1940]: E1002 20:41:03.939558 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:04.324852 kubelet[1940]: W1002 20:41:04.324537 1940 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14517ab4_0630_4cc4_b4f6_d38d30945409.slice/cri-containerd-1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca.scope WatchSource:0}: task 1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca not found: not found Oct 2 20:41:04.940698 kubelet[1940]: E1002 20:41:04.940665 1940 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:05.941312 kubelet[1940]: E1002 20:41:05.941277 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:06.941539 kubelet[1940]: E1002 20:41:06.941507 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:07.942118 kubelet[1940]: E1002 20:41:07.942088 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:07.975221 kubelet[1940]: E1002 20:41:07.975203 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:08.942919 kubelet[1940]: E1002 20:41:08.942891 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:09.944155 kubelet[1940]: E1002 20:41:09.944128 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:10.945588 kubelet[1940]: E1002 20:41:10.945560 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:11.946711 kubelet[1940]: E1002 20:41:11.946671 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:12.888649 kubelet[1940]: E1002 20:41:12.888611 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:12.947142 kubelet[1940]: E1002 20:41:12.947114 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:12.975860 kubelet[1940]: E1002 20:41:12.975837 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:13.947863 kubelet[1940]: E1002 20:41:13.947820 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:14.098185 kubelet[1940]: E1002 20:41:14.098037 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:41:14.947923 kubelet[1940]: E1002 20:41:14.947892 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:15.949377 kubelet[1940]: E1002 20:41:15.949341 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:16.949505 kubelet[1940]: E1002 20:41:16.949466 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:17.950394 kubelet[1940]: E1002 20:41:17.950357 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:17.977062 kubelet[1940]: 
E1002 20:41:17.977042 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:18.951410 kubelet[1940]: E1002 20:41:18.951379 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:19.952296 kubelet[1940]: E1002 20:41:19.952266 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:20.952977 kubelet[1940]: E1002 20:41:20.952934 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:21.953522 kubelet[1940]: E1002 20:41:21.953489 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:22.954028 kubelet[1940]: E1002 20:41:22.953981 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:22.977767 kubelet[1940]: E1002 20:41:22.977745 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:23.954658 kubelet[1940]: E1002 20:41:23.954620 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:24.955873 kubelet[1940]: E1002 20:41:24.955843 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:25.956772 kubelet[1940]: E1002 20:41:25.956741 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:26.957286 kubelet[1940]: E1002 20:41:26.957247 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:27.957815 kubelet[1940]: E1002 20:41:27.957766 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:27.978444 kubelet[1940]: E1002 20:41:27.978425 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:28.958879 kubelet[1940]: E1002 20:41:28.958852 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:29.098529 kubelet[1940]: E1002 20:41:29.098503 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:41:29.959970 kubelet[1940]: E1002 20:41:29.959938 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:30.960727 kubelet[1940]: E1002 20:41:30.960689 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:31.961497 kubelet[1940]: E1002 20:41:31.961461 1940 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:32.888356 kubelet[1940]: E1002 20:41:32.888327 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:32.961566 kubelet[1940]: E1002 20:41:32.961546 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:32.979358 kubelet[1940]: E1002 20:41:32.979340 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:33.962573 kubelet[1940]: E1002 20:41:33.962539 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:34.962999 kubelet[1940]: E1002 20:41:34.962957 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:35.963953 kubelet[1940]: E1002 20:41:35.963919 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:36.964961 kubelet[1940]: E1002 20:41:36.964918 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:37.965361 kubelet[1940]: E1002 20:41:37.965330 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:37.980921 kubelet[1940]: E1002 20:41:37.980900 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:38.965701 kubelet[1940]: E1002 20:41:38.965672 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:39.966186 kubelet[1940]: E1002 20:41:39.966149 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:40.967025 kubelet[1940]: E1002 20:41:40.966972 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:41.967660 kubelet[1940]: E1002 20:41:41.967628 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:42.968736 kubelet[1940]: E1002 20:41:42.968657 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:42.981650 kubelet[1940]: E1002 20:41:42.981625 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:43.969705 kubelet[1940]: E1002 20:41:43.969675 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:44.100209 env[1383]: time="2023-10-02T20:41:44.100080907Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 20:41:44.130226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1219582862.mount: Deactivated successfully. 
Oct 2 20:41:44.134938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount806154324.mount: Deactivated successfully. Oct 2 20:41:44.151739 env[1383]: time="2023-10-02T20:41:44.151697432Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9\"" Oct 2 20:41:44.152199 env[1383]: time="2023-10-02T20:41:44.152175938Z" level=info msg="StartContainer for \"980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9\"" Oct 2 20:41:44.177727 systemd[1]: Started cri-containerd-980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9.scope. Oct 2 20:41:44.191030 systemd[1]: cri-containerd-980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9.scope: Deactivated successfully. Oct 2 20:41:44.213160 env[1383]: time="2023-10-02T20:41:44.213103997Z" level=info msg="shim disconnected" id=980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9 Oct 2 20:41:44.213406 env[1383]: time="2023-10-02T20:41:44.213377429Z" level=warning msg="cleaning up after shim disconnected" id=980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9 namespace=k8s.io Oct 2 20:41:44.213483 env[1383]: time="2023-10-02T20:41:44.213469866Z" level=info msg="cleaning up dead shim" Oct 2 20:41:44.224805 env[1383]: time="2023-10-02T20:41:44.224714985Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:41:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2439 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:41:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:41:44.225260 env[1383]: time="2023-10-02T20:41:44.225210771Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:41:44.226084 env[1383]: time="2023-10-02T20:41:44.226052706Z" level=error msg="Failed to pipe stdout of container \"980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9\"" error="reading from a closed fifo" Oct 2 20:41:44.226214 env[1383]: time="2023-10-02T20:41:44.226174463Z" level=error msg="Failed to pipe stderr of container \"980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9\"" error="reading from a closed fifo" Oct 2 20:41:44.230380 env[1383]: time="2023-10-02T20:41:44.230337904Z" level=error msg="StartContainer for \"980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:41:44.231010 kubelet[1940]: E1002 20:41:44.230548 1940 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9" Oct 2 20:41:44.231010 kubelet[1940]: E1002 20:41:44.230637 1940 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:41:44.231010 kubelet[1940]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:41:44.231010 kubelet[1940]: rm /hostbin/cilium-mount Oct 2 20:41:44.231185 kubelet[1940]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nm5zs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:41:44.231236 kubelet[1940]: E1002 20:41:44.230671 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:41:44.264186 kubelet[1940]: I1002 20:41:44.263737 1940 scope.go:115] "RemoveContainer" containerID="1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca" Oct 2 20:41:44.264186 kubelet[1940]: I1002 20:41:44.264036 1940 scope.go:115] "RemoveContainer" containerID="1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca" Oct 2 20:41:44.265515 env[1383]: time="2023-10-02T20:41:44.265401622Z" level=info msg="RemoveContainer for \"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\"" Oct 2 20:41:44.265755 env[1383]: time="2023-10-02T20:41:44.265730132Z" level=info msg="RemoveContainer for \"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\"" Oct 2 20:41:44.266031 env[1383]: time="2023-10-02T20:41:44.266000165Z" level=error msg="RemoveContainer for \"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\" failed" error="failed to set removing state for container 
\"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\": container is already in removing state" Oct 2 20:41:44.266302 kubelet[1940]: E1002 20:41:44.266241 1940 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\": container is already in removing state" containerID="1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca" Oct 2 20:41:44.266302 kubelet[1940]: I1002 20:41:44.266280 1940 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca} err="rpc error: code = Unknown desc = failed to set removing state for container \"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\": container is already in removing state" Oct 2 20:41:44.277474 env[1383]: time="2023-10-02T20:41:44.277172845Z" level=info msg="RemoveContainer for \"1554082acc4a86f79665dc73981d365821feafb5a14eabef2ff174c1f05e43ca\" returns successfully" Oct 2 20:41:44.277643 kubelet[1940]: E1002 20:41:44.277557 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:41:44.971025 kubelet[1940]: E1002 20:41:44.971000 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:45.128587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9-rootfs.mount: Deactivated successfully. 
Oct 2 20:41:45.972159 kubelet[1940]: E1002 20:41:45.972124 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:46.973179 kubelet[1940]: E1002 20:41:46.973138 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:47.316867 kubelet[1940]: W1002 20:41:47.316562 1940 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14517ab4_0630_4cc4_b4f6_d38d30945409.slice/cri-containerd-980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9.scope WatchSource:0}: task 980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9 not found: not found Oct 2 20:41:47.973950 kubelet[1940]: E1002 20:41:47.973911 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:47.982556 kubelet[1940]: E1002 20:41:47.982536 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:48.974400 kubelet[1940]: E1002 20:41:48.974366 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:49.974732 kubelet[1940]: E1002 20:41:49.974698 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:50.974857 kubelet[1940]: E1002 20:41:50.974812 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:51.975488 kubelet[1940]: E1002 20:41:51.975451 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:52.888273 kubelet[1940]: E1002 20:41:52.888243 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:52.975861 kubelet[1940]: E1002 20:41:52.975839 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:52.983401 kubelet[1940]: E1002 20:41:52.983381 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:53.977205 kubelet[1940]: E1002 20:41:53.977172 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:54.978491 kubelet[1940]: E1002 20:41:54.978458 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:55.098875 kubelet[1940]: E1002 20:41:55.098848 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:41:55.979477 kubelet[1940]: E1002 20:41:55.979442 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:56.979574 kubelet[1940]: E1002 20:41:56.979535 1940 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:57.980318 kubelet[1940]: E1002 20:41:57.980288 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:57.984815 kubelet[1940]: E1002 20:41:57.984799 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:41:58.980884 kubelet[1940]: E1002 20:41:58.980856 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:41:59.982062 kubelet[1940]: E1002 20:41:59.982030 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:00.982697 kubelet[1940]: E1002 20:42:00.982668 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:01.984053 kubelet[1940]: E1002 20:42:01.984022 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:02.984467 kubelet[1940]: E1002 20:42:02.984438 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:02.985880 kubelet[1940]: E1002 20:42:02.985858 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:03.984749 kubelet[1940]: E1002 20:42:03.984701 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:04.985098 kubelet[1940]: E1002 20:42:04.985064 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:05.985338 kubelet[1940]: E1002 20:42:05.985301 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:06.986246 kubelet[1940]: E1002 20:42:06.986216 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:07.986799 kubelet[1940]: E1002 20:42:07.986762 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:07.987192 kubelet[1940]: E1002 20:42:07.987176 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:08.098083 kubelet[1940]: E1002 20:42:08.098058 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:42:08.987072 kubelet[1940]: E1002 20:42:08.987036 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:09.988136 kubelet[1940]: E1002 20:42:09.988088 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:42:10.988806 kubelet[1940]: E1002 20:42:10.988750 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:11.989666 kubelet[1940]: E1002 20:42:11.989634 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:12.888796 kubelet[1940]: E1002 20:42:12.888759 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:12.988447 kubelet[1940]: E1002 20:42:12.988425 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:12.990776 kubelet[1940]: E1002 20:42:12.990760 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:13.991784 kubelet[1940]: E1002 20:42:13.991755 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:14.992841 kubelet[1940]: E1002 20:42:14.992798 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:15.993535 kubelet[1940]: E1002 20:42:15.993499 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:16.994302 kubelet[1940]: E1002 20:42:16.994269 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:17.989323 kubelet[1940]: E1002 20:42:17.989299 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:17.994416 kubelet[1940]: E1002 20:42:17.994403 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:18.994999 kubelet[1940]: E1002 20:42:18.994956 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:19.098598 kubelet[1940]: E1002 20:42:19.098573 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:42:19.995443 kubelet[1940]: E1002 20:42:19.995414 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:20.996148 kubelet[1940]: E1002 20:42:20.996119 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:21.996976 kubelet[1940]: E1002 20:42:21.996942 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:22.990645 kubelet[1940]: E1002 20:42:22.990615 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:22.998162 kubelet[1940]: E1002 
20:42:22.998133 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:23.998632 kubelet[1940]: E1002 20:42:23.998595 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:24.999494 kubelet[1940]: E1002 20:42:24.999459 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:26.000284 kubelet[1940]: E1002 20:42:26.000251 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:27.001108 kubelet[1940]: E1002 20:42:27.001077 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:27.991582 kubelet[1940]: E1002 20:42:27.991560 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:28.001946 kubelet[1940]: E1002 20:42:28.001935 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:29.002350 kubelet[1940]: E1002 20:42:29.002296 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:30.002884 kubelet[1940]: E1002 20:42:30.002853 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:31.003932 kubelet[1940]: E1002 20:42:31.003898 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:32.004425 kubelet[1940]: E1002 20:42:32.004388 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:32.097869 kubelet[1940]: E1002 20:42:32.097844 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:42:32.888608 kubelet[1940]: E1002 20:42:32.888567 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:32.992301 kubelet[1940]: E1002 20:42:32.992267 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:33.005545 kubelet[1940]: E1002 20:42:33.005526 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:34.006260 kubelet[1940]: E1002 20:42:34.006228 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:35.007023 kubelet[1940]: E1002 20:42:35.006979 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:36.007748 kubelet[1940]: E1002 20:42:36.007713 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
20:42:37.008704 kubelet[1940]: E1002 20:42:37.008673 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:37.993273 kubelet[1940]: E1002 20:42:37.993180 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:38.009442 kubelet[1940]: E1002 20:42:38.009424 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:39.010703 kubelet[1940]: E1002 20:42:39.010672 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:40.011441 kubelet[1940]: E1002 20:42:40.011410 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:41.012159 kubelet[1940]: E1002 20:42:41.012118 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:42.013231 kubelet[1940]: E1002 20:42:42.013195 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:42.994630 kubelet[1940]: E1002 20:42:42.994605 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:43.014268 kubelet[1940]: E1002 20:42:43.014251 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:44.015618 kubelet[1940]: E1002 20:42:44.015589 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:45.016645 kubelet[1940]: E1002 20:42:45.016614 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:45.098632 kubelet[1940]: E1002 20:42:45.098599 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:42:46.017525 kubelet[1940]: E1002 20:42:46.017492 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:47.018383 kubelet[1940]: E1002 20:42:47.018343 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:47.995675 kubelet[1940]: E1002 20:42:47.995642 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:48.019200 kubelet[1940]: E1002 20:42:48.019169 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:49.020122 kubelet[1940]: E1002 20:42:49.020092 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:50.021025 kubelet[1940]: E1002 20:42:50.020981 1940 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:51.022067 kubelet[1940]: E1002 20:42:51.022036 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:52.023555 kubelet[1940]: E1002 20:42:52.023524 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:52.887915 kubelet[1940]: E1002 20:42:52.887869 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:52.996053 kubelet[1940]: E1002 20:42:52.996023 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:53.024296 kubelet[1940]: E1002 20:42:53.024275 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:54.025288 kubelet[1940]: E1002 20:42:54.025259 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:55.025889 kubelet[1940]: E1002 20:42:55.025859 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:56.026772 kubelet[1940]: E1002 20:42:56.026739 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:57.027791 kubelet[1940]: E1002 20:42:57.027761 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:57.098920 kubelet[1940]: E1002 20:42:57.098892 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:42:57.996892 kubelet[1940]: E1002 20:42:57.996863 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:42:58.027967 kubelet[1940]: E1002 20:42:58.027952 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:42:59.028844 kubelet[1940]: E1002 20:42:59.028812 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:00.029890 kubelet[1940]: E1002 20:43:00.029850 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:01.030616 kubelet[1940]: E1002 20:43:01.030585 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:02.031225 kubelet[1940]: E1002 20:43:02.031190 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:02.998098 kubelet[1940]: E1002 20:43:02.998070 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" Oct 2 20:43:03.031473 kubelet[1940]: E1002 20:43:03.031459 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:04.031889 kubelet[1940]: E1002 20:43:04.031843 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:05.032268 kubelet[1940]: E1002 20:43:05.032235 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:06.032852 kubelet[1940]: E1002 20:43:06.032822 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:07.033550 kubelet[1940]: E1002 20:43:07.033518 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:07.999243 kubelet[1940]: E1002 20:43:07.999220 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:43:08.034509 kubelet[1940]: E1002 20:43:08.034490 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:09.035965 kubelet[1940]: E1002 20:43:09.035917 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:09.100852 env[1383]: time="2023-10-02T20:43:09.100721552Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 20:43:09.128471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263125893.mount: Deactivated successfully. Oct 2 20:43:09.132735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3843080860.mount: Deactivated successfully. Oct 2 20:43:09.147954 env[1383]: time="2023-10-02T20:43:09.147889078Z" level=info msg="CreateContainer within sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d\"" Oct 2 20:43:09.148581 env[1383]: time="2023-10-02T20:43:09.148544541Z" level=info msg="StartContainer for \"36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d\"" Oct 2 20:43:09.168459 systemd[1]: Started cri-containerd-36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d.scope. Oct 2 20:43:09.182885 systemd[1]: cri-containerd-36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d.scope: Deactivated successfully. 
Oct 2 20:43:09.217559 env[1383]: time="2023-10-02T20:43:09.216852067Z" level=info msg="shim disconnected" id=36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d Oct 2 20:43:09.217559 env[1383]: time="2023-10-02T20:43:09.216906149Z" level=warning msg="cleaning up after shim disconnected" id=36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d namespace=k8s.io Oct 2 20:43:09.217559 env[1383]: time="2023-10-02T20:43:09.216917470Z" level=info msg="cleaning up dead shim" Oct 2 20:43:09.228036 env[1383]: time="2023-10-02T20:43:09.227963326Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:43:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2483 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:43:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:43:09.228295 env[1383]: time="2023-10-02T20:43:09.228235455Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:43:09.230079 env[1383]: time="2023-10-02T20:43:09.230041477Z" level=error msg="Failed to pipe stderr of container \"36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d\"" error="reading from a closed fifo" Oct 2 20:43:09.230214 env[1383]: time="2023-10-02T20:43:09.230178241Z" level=error msg="Failed to pipe stdout of container \"36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d\"" error="reading from a closed fifo" Oct 2 20:43:09.234856 env[1383]: time="2023-10-02T20:43:09.234811119Z" level=error msg="StartContainer for \"36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:43:09.235332 kubelet[1940]: E1002 20:43:09.235151 1940 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d" Oct 2 20:43:09.235332 kubelet[1940]: E1002 20:43:09.235264 1940 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:43:09.235332 kubelet[1940]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:43:09.235332 kubelet[1940]: rm /hostbin/cilium-mount Oct 2 20:43:09.235509 kubelet[1940]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nm5zs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:43:09.235564 kubelet[1940]: E1002 20:43:09.235309 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:43:09.377837 kubelet[1940]: I1002 20:43:09.377230 1940 scope.go:115] "RemoveContainer" containerID="980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9" Oct 2 20:43:09.377837 kubelet[1940]: I1002 20:43:09.377590 1940 scope.go:115] "RemoveContainer" containerID="980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9" Oct 2 20:43:09.379370 env[1383]: time="2023-10-02T20:43:09.379323321Z" level=info msg="RemoveContainer for \"980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9\"" Oct 2 20:43:09.379516 env[1383]: time="2023-10-02T20:43:09.379495847Z" level=info msg="RemoveContainer for \"980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9\"" Oct 2 20:43:09.379671 env[1383]: time="2023-10-02T20:43:09.379617931Z" level=error msg="RemoveContainer for \"980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9\" failed" error="failed to set removing state for container \"980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9\": container is already in removing state" Oct 2 20:43:09.381448 kubelet[1940]: E1002 20:43:09.381033 1940 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9\": container is already in removing state" 
containerID="980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9" Oct 2 20:43:09.381448 kubelet[1940]: E1002 20:43:09.381059 1940 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9": container is already in removing state; Skipping pod "cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)" Oct 2 20:43:09.381448 kubelet[1940]: E1002 20:43:09.381321 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:43:09.393420 env[1383]: time="2023-10-02T20:43:09.393375720Z" level=info msg="RemoveContainer for \"980be3857993516e22c66bb2a68e799f306d7cf69a6d9f7011e4fd3a52288ae9\" returns successfully" Oct 2 20:43:10.036771 kubelet[1940]: E1002 20:43:10.036743 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:10.126623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d-rootfs.mount: Deactivated successfully. Oct 2 20:43:11.037813 kubelet[1940]: E1002 20:43:11.037777 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:12.038434 kubelet[1940]: E1002 20:43:12.038402 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:12.321538 kubelet[1940]: W1002 20:43:12.321242 1940 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14517ab4_0630_4cc4_b4f6_d38d30945409.slice/cri-containerd-36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d.scope WatchSource:0}: task 36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d not found: not found Oct 2 20:43:12.888821 kubelet[1940]: E1002 20:43:12.888747 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:13.000475 kubelet[1940]: E1002 20:43:13.000447 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:43:13.039037 kubelet[1940]: E1002 20:43:13.039016 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:14.039305 kubelet[1940]: E1002 20:43:14.039270 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:15.039925 kubelet[1940]: E1002 20:43:15.039891 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:16.041299 kubelet[1940]: E1002 20:43:16.041272 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:17.042656 kubelet[1940]: E1002 20:43:17.042614 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:43:18.001408 kubelet[1940]: E1002 20:43:18.001383 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:43:18.043906 kubelet[1940]: E1002 20:43:18.043891 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:19.044728 kubelet[1940]: E1002 20:43:19.044697 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:20.045766 kubelet[1940]: E1002 20:43:20.045731 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:21.046300 kubelet[1940]: E1002 20:43:21.046271 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:21.099428 kubelet[1940]: E1002 20:43:21.099401 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-k4xvv_kube-system(14517ab4-0630-4cc4-b4f6-d38d30945409)\"" pod="kube-system/cilium-k4xvv" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 Oct 2 20:43:22.047193 kubelet[1940]: E1002 20:43:22.047163 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:23.002253 kubelet[1940]: E1002 20:43:23.002199 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:43:23.048441 kubelet[1940]: E1002 20:43:23.048422 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:24.048882 kubelet[1940]: E1002 20:43:24.048854 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:25.049618 kubelet[1940]: E1002 20:43:25.049591 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:26.050897 kubelet[1940]: E1002 20:43:26.050857 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:27.051896 kubelet[1940]: E1002 20:43:27.051863 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:27.547514 env[1383]: time="2023-10-02T20:43:27.547297616Z" level=info msg="StopPodSandbox for \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\"" Oct 2 20:43:27.547514 env[1383]: time="2023-10-02T20:43:27.547359698Z" level=info msg="Container to stop \"36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:43:27.548938 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be-shm.mount: Deactivated successfully. 
Oct 2 20:43:27.567228 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 20:43:27.567332 kernel: audit: type=1334 audit(1696279407.556:668): prog-id=72 op=UNLOAD Oct 2 20:43:27.556000 audit: BPF prog-id=72 op=UNLOAD Oct 2 20:43:27.556807 systemd[1]: cri-containerd-2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be.scope: Deactivated successfully. Oct 2 20:43:27.568000 audit: BPF prog-id=75 op=UNLOAD Oct 2 20:43:27.577012 kernel: audit: type=1334 audit(1696279407.568:669): prog-id=75 op=UNLOAD Oct 2 20:43:27.591797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be-rootfs.mount: Deactivated successfully. Oct 2 20:43:27.625699 env[1383]: time="2023-10-02T20:43:27.625655232Z" level=info msg="shim disconnected" id=2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be Oct 2 20:43:27.626053 env[1383]: time="2023-10-02T20:43:27.626029441Z" level=warning msg="cleaning up after shim disconnected" id=2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be namespace=k8s.io Oct 2 20:43:27.626150 env[1383]: time="2023-10-02T20:43:27.626136443Z" level=info msg="cleaning up dead shim" Oct 2 20:43:27.638192 env[1383]: time="2023-10-02T20:43:27.638161728Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:43:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2516 runtime=io.containerd.runc.v2\n" Oct 2 20:43:27.638579 env[1383]: time="2023-10-02T20:43:27.638554137Z" level=info msg="TearDown network for sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" successfully" Oct 2 20:43:27.638677 env[1383]: time="2023-10-02T20:43:27.638660460Z" level=info msg="StopPodSandbox for \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" returns successfully" Oct 2 20:43:27.700094 kubelet[1940]: I1002 20:43:27.700067 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-lib-modules\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700228 kubelet[1940]: I1002 20:43:27.700103 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cni-path\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700228 kubelet[1940]: I1002 20:43:27.700121 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-bpf-maps\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700228 kubelet[1940]: I1002 20:43:27.700138 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-host-proc-sys-net\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700228 kubelet[1940]: I1002 20:43:27.700157 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-xtables-lock\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700228 
kubelet[1940]: I1002 20:43:27.700184 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-config-path\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700228 kubelet[1940]: I1002 20:43:27.700209 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14517ab4-0630-4cc4-b4f6-d38d30945409-hubble-tls\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700390 kubelet[1940]: I1002 20:43:27.700230 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-cgroup\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700390 kubelet[1940]: I1002 20:43:27.700248 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-host-proc-sys-kernel\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700390 kubelet[1940]: I1002 20:43:27.700266 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-etc-cni-netd\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700390 kubelet[1940]: I1002 20:43:27.700285 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14517ab4-0630-4cc4-b4f6-d38d30945409-clustermesh-secrets\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700390 kubelet[1940]: I1002 20:43:27.700305 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm5zs\" (UniqueName: \"kubernetes.io/projected/14517ab4-0630-4cc4-b4f6-d38d30945409-kube-api-access-nm5zs\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700390 kubelet[1940]: I1002 20:43:27.700323 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-run\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700524 kubelet[1940]: I1002 20:43:27.700340 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-hostproc\") pod \"14517ab4-0630-4cc4-b4f6-d38d30945409\" (UID: \"14517ab4-0630-4cc4-b4f6-d38d30945409\") " Oct 2 20:43:27.700524 kubelet[1940]: I1002 20:43:27.700375 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-hostproc" (OuterVolumeSpecName: "hostproc") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:27.700524 kubelet[1940]: I1002 20:43:27.700403 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:27.700524 kubelet[1940]: I1002 20:43:27.700418 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cni-path" (OuterVolumeSpecName: "cni-path") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:27.700524 kubelet[1940]: I1002 20:43:27.700432 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:27.700636 kubelet[1940]: I1002 20:43:27.700445 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:27.700636 kubelet[1940]: I1002 20:43:27.700459 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:27.700636 kubelet[1940]: W1002 20:43:27.700595 1940 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/14517ab4-0630-4cc4-b4f6-d38d30945409/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:43:27.702238 kubelet[1940]: I1002 20:43:27.702209 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:43:27.702841 kubelet[1940]: I1002 20:43:27.702458 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:27.702841 kubelet[1940]: I1002 20:43:27.702486 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:27.702841 kubelet[1940]: I1002 20:43:27.702502 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:27.702841 kubelet[1940]: I1002 20:43:27.702776 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:27.707207 systemd[1]: var-lib-kubelet-pods-14517ab4\x2d0630\x2d4cc4\x2db4f6\x2dd38d30945409-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnm5zs.mount: Deactivated successfully. Oct 2 20:43:27.708875 systemd[1]: var-lib-kubelet-pods-14517ab4\x2d0630\x2d4cc4\x2db4f6\x2dd38d30945409-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:43:27.710496 kubelet[1940]: I1002 20:43:27.709677 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14517ab4-0630-4cc4-b4f6-d38d30945409-kube-api-access-nm5zs" (OuterVolumeSpecName: "kube-api-access-nm5zs") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "kube-api-access-nm5zs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:43:27.710673 kubelet[1940]: I1002 20:43:27.710652 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14517ab4-0630-4cc4-b4f6-d38d30945409-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:43:27.713754 systemd[1]: var-lib-kubelet-pods-14517ab4\x2d0630\x2d4cc4\x2db4f6\x2dd38d30945409-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:43:27.714743 kubelet[1940]: I1002 20:43:27.714711 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14517ab4-0630-4cc4-b4f6-d38d30945409-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "14517ab4-0630-4cc4-b4f6-d38d30945409" (UID: "14517ab4-0630-4cc4-b4f6-d38d30945409"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:43:27.802095 kubelet[1940]: I1002 20:43:27.800911 1940 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-run\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802095 kubelet[1940]: I1002 20:43:27.800938 1940 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-host-proc-sys-kernel\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802095 kubelet[1940]: I1002 20:43:27.800954 1940 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-etc-cni-netd\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802095 kubelet[1940]: I1002 20:43:27.800965 1940 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14517ab4-0630-4cc4-b4f6-d38d30945409-clustermesh-secrets\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802095 kubelet[1940]: I1002 20:43:27.800974 1940 reconciler.go:399] "Volume detached for volume \"kube-api-access-nm5zs\" (UniqueName: \"kubernetes.io/projected/14517ab4-0630-4cc4-b4f6-d38d30945409-kube-api-access-nm5zs\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802095 kubelet[1940]: I1002 20:43:27.800995 1940 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-hostproc\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802095 kubelet[1940]: I1002 20:43:27.801005 1940 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-lib-modules\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802095 kubelet[1940]: I1002 20:43:27.801013 1940 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cni-path\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802344 kubelet[1940]: I1002 20:43:27.801023 1940 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-bpf-maps\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802344 kubelet[1940]: I1002 20:43:27.801032 1940 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-host-proc-sys-net\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802344 kubelet[1940]: I1002 20:43:27.801042 1940 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-xtables-lock\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802344 kubelet[1940]: I1002 20:43:27.801051 1940 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-config-path\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802344 kubelet[1940]: I1002 20:43:27.801060 1940 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14517ab4-0630-4cc4-b4f6-d38d30945409-hubble-tls\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:27.802344 kubelet[1940]: I1002 20:43:27.801068 
1940 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14517ab4-0630-4cc4-b4f6-d38d30945409-cilium-cgroup\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:28.003730 kubelet[1940]: E1002 20:43:28.003705 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:43:28.052090 kubelet[1940]: E1002 20:43:28.052068 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:28.403781 kubelet[1940]: I1002 20:43:28.403754 1940 scope.go:115] "RemoveContainer" containerID="36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d" Oct 2 20:43:28.407260 systemd[1]: Removed slice kubepods-burstable-pod14517ab4_0630_4cc4_b4f6_d38d30945409.slice. Oct 2 20:43:28.408615 env[1383]: time="2023-10-02T20:43:28.408583653Z" level=info msg="RemoveContainer for \"36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d\"" Oct 2 20:43:28.417964 env[1383]: time="2023-10-02T20:43:28.417933630Z" level=info msg="RemoveContainer for \"36134f7cf94861e929fb6770264606a1a07db229fb673752dfe1a5a70aa2fb1d\" returns successfully" Oct 2 20:43:28.428348 kubelet[1940]: I1002 20:43:28.428312 1940 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:43:28.428456 kubelet[1940]: E1002 20:43:28.428358 1940 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" containerName="mount-cgroup" Oct 2 20:43:28.428456 kubelet[1940]: E1002 20:43:28.428367 1940 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" containerName="mount-cgroup" Oct 2 20:43:28.428456 kubelet[1940]: E1002 20:43:28.428374 1940 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" containerName="mount-cgroup" Oct 2 20:43:28.428456 kubelet[1940]: E1002 20:43:28.428380 1940 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" containerName="mount-cgroup" Oct 2 20:43:28.428456 kubelet[1940]: E1002 20:43:28.428386 1940 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" containerName="mount-cgroup" Oct 2 20:43:28.428456 kubelet[1940]: I1002 20:43:28.428401 1940 memory_manager.go:345] "RemoveStaleState removing state" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" containerName="mount-cgroup" Oct 2 20:43:28.428456 kubelet[1940]: I1002 20:43:28.428407 1940 memory_manager.go:345] "RemoveStaleState removing state" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" containerName="mount-cgroup" Oct 2 20:43:28.428456 kubelet[1940]: I1002 20:43:28.428413 1940 memory_manager.go:345] "RemoveStaleState removing state" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" containerName="mount-cgroup" Oct 2 20:43:28.428456 kubelet[1940]: I1002 20:43:28.428418 1940 memory_manager.go:345] "RemoveStaleState removing state" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" containerName="mount-cgroup" Oct 2 20:43:28.428456 kubelet[1940]: I1002 20:43:28.428424 1940 memory_manager.go:345] "RemoveStaleState removing state" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" containerName="mount-cgroup" Oct 2 20:43:28.428456 kubelet[1940]: E1002 20:43:28.428434 1940 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" 
containerName="mount-cgroup" Oct 2 20:43:28.428456 kubelet[1940]: I1002 20:43:28.428446 1940 memory_manager.go:345] "RemoveStaleState removing state" podUID="14517ab4-0630-4cc4-b4f6-d38d30945409" containerName="mount-cgroup" Oct 2 20:43:28.432796 systemd[1]: Created slice kubepods-burstable-pod490d0e0a_a4b0_47d7_a30f_c82e91917098.slice. Oct 2 20:43:28.503937 kubelet[1940]: I1002 20:43:28.503910 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-xtables-lock\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.504180 kubelet[1940]: I1002 20:43:28.504162 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/490d0e0a-a4b0-47d7-a30f-c82e91917098-clustermesh-secrets\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.504298 kubelet[1940]: I1002 20:43:28.504288 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-hubble-tls\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.504395 kubelet[1940]: I1002 20:43:28.504385 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-lib-modules\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.504502 kubelet[1940]: I1002 20:43:28.504492 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2rmj\" (UniqueName: \"kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.504601 kubelet[1940]: I1002 20:43:28.504591 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-hostproc\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.504696 kubelet[1940]: I1002 20:43:28.504686 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-cgroup\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.504788 kubelet[1940]: I1002 20:43:28.504779 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-run\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.504880 kubelet[1940]: I1002 20:43:28.504870 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cni-path\") pod \"cilium-v2ld4\" (UID: 
\"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.504978 kubelet[1940]: I1002 20:43:28.504968 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-etc-cni-netd\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.505115 kubelet[1940]: I1002 20:43:28.505104 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-config-path\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.505210 kubelet[1940]: I1002 20:43:28.505201 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-host-proc-sys-net\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.505303 kubelet[1940]: I1002 20:43:28.505294 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-host-proc-sys-kernel\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.505393 kubelet[1940]: I1002 20:43:28.505384 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-bpf-maps\") pod \"cilium-v2ld4\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " pod="kube-system/cilium-v2ld4" Oct 2 20:43:28.610869 kubelet[1940]: E1002 20:43:28.610849 1940 projected.go:196] Error preparing data for projected volume kube-api-access-s2rmj for pod kube-system/cilium-v2ld4: failed to fetch token: serviceaccounts "cilium" not found Oct 2 20:43:28.611064 kubelet[1940]: E1002 20:43:28.611052 1940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj podName:490d0e0a-a4b0-47d7-a30f-c82e91917098 nodeName:}" failed. No retries permitted until 2023-10-02 20:43:29.111032828 +0000 UTC m=+217.245918648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2rmj" (UniqueName: "kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj") pod "cilium-v2ld4" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098") : failed to fetch token: serviceaccounts "cilium" not found Oct 2 20:43:29.052406 kubelet[1940]: E1002 20:43:29.052363 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:29.100372 kubelet[1940]: I1002 20:43:29.100352 1940 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=14517ab4-0630-4cc4-b4f6-d38d30945409 path="/var/lib/kubelet/pods/14517ab4-0630-4cc4-b4f6-d38d30945409/volumes" Oct 2 20:43:29.211273 kubelet[1940]: E1002 20:43:29.211252 1940 projected.go:196] Error preparing data for projected volume kube-api-access-s2rmj for pod kube-system/cilium-v2ld4: failed to fetch token: serviceaccounts "cilium" not found Oct 2 20:43:29.211433 kubelet[1940]: E1002 20:43:29.211423 1940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj podName:490d0e0a-a4b0-47d7-a30f-c82e91917098 nodeName:}" failed. No retries permitted until 2023-10-02 20:43:30.21140617 +0000 UTC m=+218.346292030 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2rmj" (UniqueName: "kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj") pod "cilium-v2ld4" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098") : failed to fetch token: serviceaccounts "cilium" not found Oct 2 20:43:30.053228 kubelet[1940]: E1002 20:43:30.053196 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:30.215350 kubelet[1940]: E1002 20:43:30.215305 1940 projected.go:196] Error preparing data for projected volume kube-api-access-s2rmj for pod kube-system/cilium-v2ld4: failed to fetch token: serviceaccounts "cilium" not found Oct 2 20:43:30.215476 kubelet[1940]: E1002 20:43:30.215405 1940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj podName:490d0e0a-a4b0-47d7-a30f-c82e91917098 nodeName:}" failed. No retries permitted until 2023-10-02 20:43:32.2153905 +0000 UTC m=+220.350276360 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2rmj" (UniqueName: "kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj") pod "cilium-v2ld4" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098") : failed to fetch token: serviceaccounts "cilium" not found Oct 2 20:43:31.054221 kubelet[1940]: E1002 20:43:31.054191 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:32.054958 kubelet[1940]: E1002 20:43:32.054924 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:32.225277 kubelet[1940]: E1002 20:43:32.225256 1940 projected.go:196] Error preparing data for projected volume kube-api-access-s2rmj for pod kube-system/cilium-v2ld4: failed to fetch token: serviceaccounts "cilium" not found Oct 2 20:43:32.225442 kubelet[1940]: E1002 20:43:32.225431 1940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj podName:490d0e0a-a4b0-47d7-a30f-c82e91917098 nodeName:}" failed. No retries permitted until 2023-10-02 20:43:36.225411618 +0000 UTC m=+224.360297478 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2rmj" (UniqueName: "kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj") pod "cilium-v2ld4" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098") : failed to fetch token: serviceaccounts "cilium" not found Oct 2 20:43:32.574494 kubelet[1940]: I1002 20:43:32.574460 1940 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:43:32.578678 systemd[1]: Created slice kubepods-besteffort-pod814a94cd_f402_4a34_9fed_5a7e6df702f4.slice. Oct 2 20:43:32.625478 kubelet[1940]: I1002 20:43:32.625447 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnvjp\" (UniqueName: \"kubernetes.io/projected/814a94cd-f402-4a34-9fed-5a7e6df702f4-kube-api-access-mnvjp\") pod \"cilium-operator-69b677f97c-wq7b4\" (UID: \"814a94cd-f402-4a34-9fed-5a7e6df702f4\") " pod="kube-system/cilium-operator-69b677f97c-wq7b4" Oct 2 20:43:32.625587 kubelet[1940]: I1002 20:43:32.625505 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/814a94cd-f402-4a34-9fed-5a7e6df702f4-cilium-config-path\") pod \"cilium-operator-69b677f97c-wq7b4\" (UID: \"814a94cd-f402-4a34-9fed-5a7e6df702f4\") " pod="kube-system/cilium-operator-69b677f97c-wq7b4" Oct 2 20:43:32.881797 env[1383]: time="2023-10-02T20:43:32.881439782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-wq7b4,Uid:814a94cd-f402-4a34-9fed-5a7e6df702f4,Namespace:kube-system,Attempt:0,}" Oct 2 20:43:32.888366 kubelet[1940]: E1002 20:43:32.888349 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:32.935134 env[1383]: time="2023-10-02T20:43:32.935057923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:43:32.935134 env[1383]: time="2023-10-02T20:43:32.935100404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:43:32.935134 env[1383]: time="2023-10-02T20:43:32.935111605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:43:32.935512 env[1383]: time="2023-10-02T20:43:32.935468092Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64 pid=2542 runtime=io.containerd.runc.v2 Oct 2 20:43:32.954733 systemd[1]: Started cri-containerd-8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64.scope. Oct 2 20:43:32.969000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.969000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.003232 kernel: audit: type=1400 audit(1696279412.969:670): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.003319 kernel: audit: type=1400 audit(1696279412.969:671): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.969000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.019336 kernel: audit: type=1400 audit(1696279412.969:672): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.969000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.035655 kernel: audit: type=1400 audit(1696279412.969:673): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.035771 kubelet[1940]: E1002 20:43:33.035491 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:43:32.969000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.052743 kernel: audit: type=1400 audit(1696279412.969:674): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.969000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.070165 kernel: audit: type=1400 audit(1696279412.969:675): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.071346 kubelet[1940]: E1002 20:43:33.071297 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:32.969000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.081549 env[1383]: time="2023-10-02T20:43:33.081513805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-wq7b4,Uid:814a94cd-f402-4a34-9fed-5a7e6df702f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\"" Oct 2 20:43:33.088437 kernel: audit: type=1400 audit(1696279412.969:676): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.969000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.104716 kernel: audit: type=1400 audit(1696279412.969:677): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.969000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.121693 kernel: audit: type=1400 audit(1696279412.969:678): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.969000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:33.122498 env[1383]: time="2023-10-02T20:43:33.122466098Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 20:43:33.137798 kernel: audit: type=1400 audit(1696279412.969:679): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.969000 audit: BPF prog-id=79 op=LOAD Oct 2 20:43:32.975000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit[2553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400014db38 a2=10 a3=0 items=0 ppid=2542 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:32.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864396362633231313737623933646238373037326463656564623339 Oct 2 
20:43:32.975000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit[2553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=400014d5a0 a2=3c a3=0 items=0 ppid=2542 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:32.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864396362633231313737623933646238373037326463656564623339 Oct 2 20:43:32.975000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.975000 audit: BPF prog-id=80 op=LOAD Oct 2 20:43:32.975000 audit[2553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400014d8e0 a2=78 a3=0 items=0 ppid=2542 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:32.975000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864396362633231313737623933646238373037326463656564623339 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit: BPF prog-id=81 op=LOAD Oct 2 20:43:32.986000 audit[2553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400014d670 a2=78 a3=0 items=0 ppid=2542 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:32.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864396362633231313737623933646238373037326463656564623339 Oct 2 20:43:32.986000 audit: BPF prog-id=81 op=UNLOAD Oct 2 20:43:32.986000 audit: BPF prog-id=80 op=UNLOAD Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { perfmon } for pid=2553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit[2553]: AVC avc: denied { bpf } for pid=2553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:32.986000 audit: BPF prog-id=82 op=LOAD Oct 2 20:43:32.986000 audit[2553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400014db40 a2=78 a3=0 items=0 ppid=2542 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:32.986000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864396362633231313737623933646238373037326463656564623339 Oct 2 20:43:33.738425 systemd[1]: run-containerd-runc-k8s.io-8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64-runc.rGD9oR.mount: Deactivated successfully. Oct 2 20:43:34.072498 kubelet[1940]: E1002 20:43:34.072199 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:34.596920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3998943933.mount: Deactivated successfully. 
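
[Annotation, not part of the captured log] The audit PROCTITLE fields in the runc records above are hex-encoded, NUL-separated command lines. The short Python sketch below decodes the proctitle value recorded for pid 2553; the last argument is the shim's task log path for sandbox 8d9cbc21..., and it appears cut short by the kernel's proctitle length limit.

    # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    PROCTITLE = (
        "72756E6300"                    # "runc\0"
        "2D2D726F6F7400"                # "--root\0"
        "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F00"  # "/run/containerd/runc/k8s.io\0"
        "2D2D6C6F6700"                  # "--log\0"
        "2F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F"
        "3864396362633231313737623933646238373037326463656564623339"
    )

    def decode_proctitle(hex_value):
        """Turn a hex proctitle into the recorded command line (argv list)."""
        return bytes.fromhex(hex_value).decode("ascii", "replace").split("\x00")

    print(decode_proctitle(PROCTITLE))
    # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
    #  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d9cbc21177b93db87072dceedb39']
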
Oct 2 20:43:35.073319 kubelet[1940]: E1002 20:43:35.073263 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:35.330367 env[1383]: time="2023-10-02T20:43:35.330060816Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:43:35.336373 env[1383]: time="2023-10-02T20:43:35.336348622Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:43:35.340883 env[1383]: time="2023-10-02T20:43:35.340843231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:43:35.341455 env[1383]: time="2023-10-02T20:43:35.341422603Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f\"" Oct 2 20:43:35.343664 env[1383]: time="2023-10-02T20:43:35.343638167Z" level=info msg="CreateContainer within sandbox \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 20:43:35.365378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586859628.mount: Deactivated successfully. Oct 2 20:43:35.370675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3306885820.mount: Deactivated successfully. Oct 2 20:43:35.386041 env[1383]: time="2023-10-02T20:43:35.385950731Z" level=info msg="CreateContainer within sandbox \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\"" Oct 2 20:43:35.386775 env[1383]: time="2023-10-02T20:43:35.386717586Z" level=info msg="StartContainer for \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\"" Oct 2 20:43:35.413432 systemd[1]: Started cri-containerd-ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2.scope. 
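
[Annotation, not part of the captured log] The PullImage entries above resolve the by-digest reference quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aa... to the local image ID sha256:e0bfc5.... As a rough illustration only (this is not containerd's reference parser), a naive Python sketch splitting such a reference into repository, tag, and digest:

    # Naive split of an OCI image reference 'repo[:tag][@digest]' (illustrative only).
    def split_image_ref(ref):
        repo, _, digest = ref.partition("@")
        name, sep, tag = repo.rpartition(":")
        if not sep or "/" in tag:   # no ':' at all, or the ':' was a registry port
            return repo, "", digest
        return name, tag, digest

    ref = ("quay.io/cilium/operator-generic:v1.12.1"
           "@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1")
    print(split_image_ref(ref))
    # ('quay.io/cilium/operator-generic', 'v1.12.1', 'sha256:93d5aaed...')
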
Oct 2 20:43:35.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.429000 audit: BPF prog-id=83 op=LOAD Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2542 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:35.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365373564653439323333666262646163303633393433356438653934 Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2542 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:35.430000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365373564653439323333666262646163303633393433356438653934 Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.430000 audit: BPF prog-id=84 op=LOAD Oct 2 20:43:35.430000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2542 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:35.430000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365373564653439323333666262646163303633393433356438653934 Oct 2 20:43:35.431000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.431000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.431000 audit[2584]: AVC avc: denied { 
perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.431000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.431000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.431000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.431000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.431000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.431000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.431000 audit: BPF prog-id=85 op=LOAD Oct 2 20:43:35.431000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2542 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:35.431000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365373564653439323333666262646163303633393433356438653934 Oct 2 20:43:35.431000 audit: BPF prog-id=85 op=UNLOAD Oct 2 20:43:35.432000 audit: BPF prog-id=84 op=UNLOAD Oct 2 20:43:35.432000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.432000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.432000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.432000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.432000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.432000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.432000 audit[2584]: AVC avc: denied { perfmon } 
for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.432000 audit[2584]: AVC avc: denied { perfmon } for pid=2584 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.432000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.432000 audit[2584]: AVC avc: denied { bpf } for pid=2584 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:35.432000 audit: BPF prog-id=86 op=LOAD Oct 2 20:43:35.432000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2542 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:35.432000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6365373564653439323333666262646163303633393433356438653934 Oct 2 20:43:35.452403 env[1383]: time="2023-10-02T20:43:35.452343774Z" level=info msg="StartContainer for \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\" returns successfully" Oct 2 20:43:35.482000 audit[2594]: AVC avc: denied { map_create } for pid=2594 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c55,c481 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c55,c481 tclass=bpf permissive=0 Oct 2 20:43:35.482000 audit[2594]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=400051f768 a2=48 a3=0 items=0 ppid=2542 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c55,c481 key=(null) Oct 2 20:43:35.482000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 20:43:36.074038 kubelet[1940]: E1002 20:43:36.074002 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:36.539280 env[1383]: time="2023-10-02T20:43:36.539220927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v2ld4,Uid:490d0e0a-a4b0-47d7-a30f-c82e91917098,Namespace:kube-system,Attempt:0,}" Oct 2 20:43:36.626700 env[1383]: time="2023-10-02T20:43:36.626500189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:43:36.626700 env[1383]: time="2023-10-02T20:43:36.626538710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:43:36.626700 env[1383]: time="2023-10-02T20:43:36.626548510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:43:36.626911 env[1383]: time="2023-10-02T20:43:36.626718634Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb pid=2621 runtime=io.containerd.runc.v2 Oct 2 20:43:36.645582 systemd[1]: Started cri-containerd-7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb.scope. Oct 2 20:43:36.657000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.657000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.657000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.657000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.657000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.657000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.657000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.657000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.657000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.657000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.657000 audit: BPF prog-id=87 op=LOAD Oct 2 20:43:36.658000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.658000 audit[2631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2621 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:36.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343861366436633932656531343435376266613535303066356263 Oct 2 20:43:36.658000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.658000 audit[2631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2621 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:36.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343861366436633932656531343435376266613535303066356263 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit: BPF prog-id=88 op=LOAD Oct 2 20:43:36.659000 audit[2631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2621 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:36.659000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343861366436633932656531343435376266613535303066356263 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.659000 audit: BPF prog-id=89 op=LOAD Oct 2 20:43:36.659000 audit[2631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2621 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:36.659000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343861366436633932656531343435376266613535303066356263 Oct 2 20:43:36.660000 audit: BPF prog-id=89 op=UNLOAD Oct 2 20:43:36.660000 audit: BPF prog-id=88 op=UNLOAD Oct 2 20:43:36.660000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.660000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.660000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.660000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.660000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.660000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.660000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.660000 audit[2631]: AVC avc: denied { perfmon } for pid=2631 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.660000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.660000 audit[2631]: AVC avc: denied { bpf } for pid=2631 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:36.660000 audit: BPF prog-id=90 op=LOAD Oct 2 20:43:36.660000 audit[2631]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2621 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:36.660000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343861366436633932656531343435376266613535303066356263 Oct 2 20:43:36.676193 env[1383]: time="2023-10-02T20:43:36.676150517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v2ld4,Uid:490d0e0a-a4b0-47d7-a30f-c82e91917098,Namespace:kube-system,Attempt:0,} returns sandbox id \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\"" Oct 2 20:43:36.678463 env[1383]: time="2023-10-02T20:43:36.678428482Z" level=info msg="CreateContainer within sandbox \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:43:36.719692 env[1383]: time="2023-10-02T20:43:36.719637725Z" level=info msg="CreateContainer within sandbox \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82\"" Oct 2 20:43:36.720205 env[1383]: time="2023-10-02T20:43:36.720126295Z" level=info msg="StartContainer for \"1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82\"" Oct 2 20:43:36.738845 systemd[1]: Started cri-containerd-1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82.scope. 
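The audit PROCTITLE records above carry the audited process's argv hex-encoded, with NUL bytes separating arguments; decoded, the runc entries read "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>", which matches the shim paths containerd reports elsewhere in this log. A minimal decoding sketch in Python (the shortened hex is an illustrative prefix, not a verbatim field from the log):

    # Decode an audit PROCTITLE field: argv is hex-encoded and NUL-separated.
    proctitle_hex = (
        "72756E63"                # "runc"
        "00" "2D2D726F6F74"       # NUL, "--root"
        "00" "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"  # "/run/containerd/runc/k8s.io"
        "00" "2D2D6C6F67"         # NUL, "--log"
    )
    argv = [part.decode() for part in bytes.fromhex(proctitle_hex).split(b"\x00")]
    print(argv)  # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']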
Oct 2 20:43:36.752365 systemd[1]: cri-containerd-1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82.scope: Deactivated successfully. Oct 2 20:43:37.075188 kubelet[1940]: E1002 20:43:37.075118 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:37.090321 env[1383]: time="2023-10-02T20:43:37.090256917Z" level=info msg="shim disconnected" id=1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82 Oct 2 20:43:37.090489 env[1383]: time="2023-10-02T20:43:37.090472001Z" level=warning msg="cleaning up after shim disconnected" id=1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82 namespace=k8s.io Oct 2 20:43:37.090570 env[1383]: time="2023-10-02T20:43:37.090556162Z" level=info msg="cleaning up dead shim" Oct 2 20:43:37.105429 env[1383]: time="2023-10-02T20:43:37.105385765Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:43:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2680 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:43:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:43:37.105824 env[1383]: time="2023-10-02T20:43:37.105777093Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 20:43:37.108080 env[1383]: time="2023-10-02T20:43:37.108033656Z" level=error msg="Failed to pipe stderr of container \"1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82\"" error="reading from a closed fifo" Oct 2 20:43:37.108214 env[1383]: time="2023-10-02T20:43:37.108181578Z" level=error msg="Failed to pipe stdout of container \"1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82\"" error="reading from a closed fifo" Oct 2 20:43:37.112966 env[1383]: time="2023-10-02T20:43:37.112923109Z" level=error msg="StartContainer for \"1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:43:37.113518 kubelet[1940]: E1002 20:43:37.113149 1940 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82" Oct 2 20:43:37.113518 kubelet[1940]: E1002 20:43:37.113259 1940 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:43:37.113518 kubelet[1940]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:43:37.113518 kubelet[1940]: rm /hostbin/cilium-mount Oct 2 20:43:37.113729 kubelet[1940]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-s2rmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-v2ld4_kube-system(490d0e0a-a4b0-47d7-a30f-c82e91917098): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:43:37.113806 kubelet[1940]: E1002 20:43:37.113291 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-v2ld4" podUID=490d0e0a-a4b0-47d7-a30f-c82e91917098 Oct 2 20:43:37.363681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745370238.mount: Deactivated successfully. Oct 2 20:43:37.427381 env[1383]: time="2023-10-02T20:43:37.427336105Z" level=info msg="StopPodSandbox for \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\"" Oct 2 20:43:37.429349 env[1383]: time="2023-10-02T20:43:37.427388946Z" level=info msg="Container to stop \"1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:43:37.428682 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb-shm.mount: Deactivated successfully. Oct 2 20:43:37.436000 audit: BPF prog-id=87 op=UNLOAD Oct 2 20:43:37.437527 systemd[1]: cri-containerd-7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb.scope: Deactivated successfully. Oct 2 20:43:37.440000 audit: BPF prog-id=90 op=UNLOAD Oct 2 20:43:37.469663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb-rootfs.mount: Deactivated successfully. 
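In the SYSCALL records above, arch=c00000b7 is the audit architecture value for little-endian 64-bit AArch64 and syscall=280 is bpf(2) in the arm64 syscall table; capability=39 and capability=38 are CAP_BPF and CAP_PERFMON, and the exit=-13 in the earlier cilium-operator map_create record is -EACCES. A small lookup sketch for those numeric fields (the tables only cover the values seen in this log):

    # Translate the numeric audit fields that recur in the records above.
    import errno

    AUDIT_ARCH = {0xC00000B7: "AArch64 (LE, 64-bit)"}
    ARM64_SYSCALLS = {280: "bpf"}                        # asm-generic syscall table
    CAPABILITIES = {38: "CAP_PERFMON", 39: "CAP_BPF"}

    def describe(arch, syscall, capability=None, exit_code=None):
        parts = [AUDIT_ARCH.get(arch, hex(arch)), ARM64_SYSCALLS.get(syscall, str(syscall))]
        if capability is not None:
            parts.append(CAPABILITIES.get(capability, str(capability)))
        if exit_code is not None and exit_code < 0:
            parts.append("-" + errno.errorcode[-exit_code])  # exit=-13 -> -EACCES
        return " / ".join(parts)

    print(describe(0xC00000B7, 280, capability=39))   # AArch64 (LE, 64-bit) / bpf / CAP_BPF
    print(describe(0xC00000B7, 280, exit_code=-13))   # AArch64 (LE, 64-bit) / bpf / -EACCES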
Oct 2 20:43:37.488407 env[1383]: time="2023-10-02T20:43:37.488362989Z" level=info msg="shim disconnected" id=7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb Oct 2 20:43:37.488646 env[1383]: time="2023-10-02T20:43:37.488627354Z" level=warning msg="cleaning up after shim disconnected" id=7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb namespace=k8s.io Oct 2 20:43:37.488745 env[1383]: time="2023-10-02T20:43:37.488731236Z" level=info msg="cleaning up dead shim" Oct 2 20:43:37.501283 env[1383]: time="2023-10-02T20:43:37.501246314Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:43:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2710 runtime=io.containerd.runc.v2\n" Oct 2 20:43:37.501703 env[1383]: time="2023-10-02T20:43:37.501677483Z" level=info msg="TearDown network for sandbox \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\" successfully" Oct 2 20:43:37.501799 env[1383]: time="2023-10-02T20:43:37.501781805Z" level=info msg="StopPodSandbox for \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\" returns successfully" Oct 2 20:43:37.558992 kubelet[1940]: I1002 20:43:37.558960 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-xtables-lock\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559113 kubelet[1940]: I1002 20:43:37.559006 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-hostproc\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559113 kubelet[1940]: I1002 20:43:37.559024 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cni-path\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559113 kubelet[1940]: I1002 20:43:37.559051 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-config-path\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559113 kubelet[1940]: I1002 20:43:37.559070 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-cgroup\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559113 kubelet[1940]: I1002 20:43:37.559088 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-host-proc-sys-kernel\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559113 kubelet[1940]: I1002 20:43:37.559107 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-host-proc-sys-net\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559262 
kubelet[1940]: I1002 20:43:37.559123 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-bpf-maps\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559262 kubelet[1940]: I1002 20:43:37.559140 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-lib-modules\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559262 kubelet[1940]: I1002 20:43:37.559155 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-etc-cni-netd\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559262 kubelet[1940]: I1002 20:43:37.559175 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-hubble-tls\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559262 kubelet[1940]: I1002 20:43:37.559195 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2rmj\" (UniqueName: \"kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559262 kubelet[1940]: I1002 20:43:37.559213 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-run\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.559396 kubelet[1940]: I1002 20:43:37.559233 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/490d0e0a-a4b0-47d7-a30f-c82e91917098-clustermesh-secrets\") pod \"490d0e0a-a4b0-47d7-a30f-c82e91917098\" (UID: \"490d0e0a-a4b0-47d7-a30f-c82e91917098\") " Oct 2 20:43:37.560013 kubelet[1940]: I1002 20:43:37.559465 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:37.560013 kubelet[1940]: I1002 20:43:37.559504 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:37.560013 kubelet[1940]: I1002 20:43:37.559520 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-hostproc" (OuterVolumeSpecName: "hostproc") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:37.560013 kubelet[1940]: I1002 20:43:37.559535 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cni-path" (OuterVolumeSpecName: "cni-path") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:37.560013 kubelet[1940]: W1002 20:43:37.559657 1940 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/490d0e0a-a4b0-47d7-a30f-c82e91917098/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:43:37.560193 kubelet[1940]: I1002 20:43:37.559800 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:37.560193 kubelet[1940]: I1002 20:43:37.559824 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:37.560193 kubelet[1940]: I1002 20:43:37.559838 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:37.561950 kubelet[1940]: I1002 20:43:37.560289 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:37.561950 kubelet[1940]: I1002 20:43:37.560320 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:37.561950 kubelet[1940]: I1002 20:43:37.560511 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:43:37.561950 kubelet[1940]: I1002 20:43:37.561919 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:43:37.564481 systemd[1]: var-lib-kubelet-pods-490d0e0a\x2da4b0\x2d47d7\x2da30f\x2dc82e91917098-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:43:37.565925 kubelet[1940]: I1002 20:43:37.565887 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/490d0e0a-a4b0-47d7-a30f-c82e91917098-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:43:37.567764 systemd[1]: var-lib-kubelet-pods-490d0e0a\x2da4b0\x2d47d7\x2da30f\x2dc82e91917098-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds2rmj.mount: Deactivated successfully. Oct 2 20:43:37.570150 kubelet[1940]: I1002 20:43:37.570125 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj" (OuterVolumeSpecName: "kube-api-access-s2rmj") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "kube-api-access-s2rmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:43:37.571333 kubelet[1940]: I1002 20:43:37.571311 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "490d0e0a-a4b0-47d7-a30f-c82e91917098" (UID: "490d0e0a-a4b0-47d7-a30f-c82e91917098"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:43:37.659637 kubelet[1940]: I1002 20:43:37.659606 1940 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-lib-modules\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659771 kubelet[1940]: I1002 20:43:37.659645 1940 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-etc-cni-netd\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659771 kubelet[1940]: I1002 20:43:37.659657 1940 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-host-proc-sys-net\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659771 kubelet[1940]: I1002 20:43:37.659667 1940 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-bpf-maps\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659771 kubelet[1940]: I1002 20:43:37.659677 1940 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/490d0e0a-a4b0-47d7-a30f-c82e91917098-clustermesh-secrets\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659771 kubelet[1940]: I1002 20:43:37.659685 1940 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-hubble-tls\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659771 kubelet[1940]: I1002 20:43:37.659696 1940 reconciler.go:399] "Volume detached for volume \"kube-api-access-s2rmj\" (UniqueName: \"kubernetes.io/projected/490d0e0a-a4b0-47d7-a30f-c82e91917098-kube-api-access-s2rmj\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659771 kubelet[1940]: I1002 20:43:37.659707 1940 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-run\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659771 kubelet[1940]: I1002 20:43:37.659716 1940 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-xtables-lock\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659962 kubelet[1940]: I1002 20:43:37.659725 1940 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-hostproc\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659962 kubelet[1940]: I1002 20:43:37.659734 1940 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cni-path\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659962 kubelet[1940]: I1002 20:43:37.659743 1940 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-config-path\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659962 kubelet[1940]: I1002 20:43:37.659752 1940 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-cilium-cgroup\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:37.659962 kubelet[1940]: I1002 20:43:37.659761 1940 
reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/490d0e0a-a4b0-47d7-a30f-c82e91917098-host-proc-sys-kernel\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:43:38.036942 kubelet[1940]: E1002 20:43:38.036844 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:43:38.075527 kubelet[1940]: E1002 20:43:38.075504 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:38.363698 systemd[1]: var-lib-kubelet-pods-490d0e0a\x2da4b0\x2d47d7\x2da30f\x2dc82e91917098-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:43:38.428675 kubelet[1940]: I1002 20:43:38.428654 1940 scope.go:115] "RemoveContainer" containerID="1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82" Oct 2 20:43:38.429682 env[1383]: time="2023-10-02T20:43:38.429643958Z" level=info msg="RemoveContainer for \"1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82\"" Oct 2 20:43:38.433554 systemd[1]: Removed slice kubepods-burstable-pod490d0e0a_a4b0_47d7_a30f_c82e91917098.slice. Oct 2 20:43:38.445134 kubelet[1940]: I1002 20:43:38.445109 1940 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:43:38.445339 kubelet[1940]: E1002 20:43:38.445323 1940 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="490d0e0a-a4b0-47d7-a30f-c82e91917098" containerName="mount-cgroup" Oct 2 20:43:38.445423 kubelet[1940]: I1002 20:43:38.445414 1940 memory_manager.go:345] "RemoveStaleState removing state" podUID="490d0e0a-a4b0-47d7-a30f-c82e91917098" containerName="mount-cgroup" Oct 2 20:43:38.445787 env[1383]: time="2023-10-02T20:43:38.445734978Z" level=info msg="RemoveContainer for \"1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82\" returns successfully" Oct 2 20:43:38.450137 systemd[1]: Created slice kubepods-burstable-pod32b78d5d_1b1b_466f_87a9_3fe093940a84.slice. 
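The mount units named in the "Deactivated successfully" lines above use systemd's unit-name escaping: "/" in the original path becomes "-", and characters such as "-" and "~" are written as \x2d and \x7e. Unescaping the clustermesh-secrets unit recovers the kubelet volume path under /var/lib/kubelet/pods/490d0e0a-.../volumes/kubernetes.io~secret/. A rough unescaping sketch, assuming only the plain \xNN escapes seen here:

    # Recover the filesystem path behind an escaped systemd .mount unit name.
    import re

    def unescape_mount_unit(unit: str) -> str:
        name = unit.removesuffix(".mount")
        name = name.replace("-", "/")          # "-" separates path components
        name = re.sub(r"\\x([0-9a-fA-F]{2})",  # "\x2d" -> "-", "\x7e" -> "~"
                      lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    unit = (r"var-lib-kubelet-pods-490d0e0a\x2da4b0\x2d47d7\x2da30f\x2dc82e91917098"
            r"-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount")
    print(unescape_mount_unit(unit))
    # /var/lib/kubelet/pods/490d0e0a-a4b0-47d7-a30f-c82e91917098/volumes/kubernetes.io~secret/clustermesh-secrets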
Oct 2 20:43:38.464327 kubelet[1940]: I1002 20:43:38.464299 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-bpf-maps\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.464459 kubelet[1940]: I1002 20:43:38.464448 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-lib-modules\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.464552 kubelet[1940]: I1002 20:43:38.464541 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32b78d5d-1b1b-466f-87a9-3fe093940a84-hubble-tls\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.464646 kubelet[1940]: I1002 20:43:38.464636 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-etc-cni-netd\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.464739 kubelet[1940]: I1002 20:43:38.464730 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-ipsec-secrets\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.464834 kubelet[1940]: I1002 20:43:38.464825 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-host-proc-sys-net\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.464925 kubelet[1940]: I1002 20:43:38.464915 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-xtables-lock\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.465040 kubelet[1940]: I1002 20:43:38.465029 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32b78d5d-1b1b-466f-87a9-3fe093940a84-clustermesh-secrets\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.465157 kubelet[1940]: I1002 20:43:38.465146 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-config-path\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.465264 kubelet[1940]: I1002 20:43:38.465252 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-run\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.465365 kubelet[1940]: I1002 20:43:38.465354 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-hostproc\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.465469 kubelet[1940]: I1002 20:43:38.465459 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cni-path\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.465564 kubelet[1940]: I1002 20:43:38.465554 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-cgroup\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.465717 kubelet[1940]: I1002 20:43:38.465705 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-host-proc-sys-kernel\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.465821 kubelet[1940]: I1002 20:43:38.465810 1940 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftsr8\" (UniqueName: \"kubernetes.io/projected/32b78d5d-1b1b-466f-87a9-3fe093940a84-kube-api-access-ftsr8\") pod \"cilium-krz5t\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " pod="kube-system/cilium-krz5t" Oct 2 20:43:38.761515 env[1383]: time="2023-10-02T20:43:38.761139820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-krz5t,Uid:32b78d5d-1b1b-466f-87a9-3fe093940a84,Namespace:kube-system,Attempt:0,}" Oct 2 20:43:38.808912 env[1383]: time="2023-10-02T20:43:38.808861310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:43:38.809117 env[1383]: time="2023-10-02T20:43:38.809091914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:43:38.809214 env[1383]: time="2023-10-02T20:43:38.809193396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:43:38.809554 env[1383]: time="2023-10-02T20:43:38.809505682Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e pid=2737 runtime=io.containerd.runc.v2 Oct 2 20:43:38.822605 systemd[1]: Started cri-containerd-d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e.scope. 
Oct 2 20:43:38.842773 kernel: kauditd_printk_skb: 166 callbacks suppressed Oct 2 20:43:38.842868 kernel: audit: type=1400 audit(1696279418.837:727): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.837000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.837000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.873806 kernel: audit: type=1400 audit(1696279418.837:728): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.837000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.889409 kernel: audit: type=1400 audit(1696279418.837:729): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.837000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.905950 kernel: audit: type=1400 audit(1696279418.837:730): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.837000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.921619 kernel: audit: type=1400 audit(1696279418.837:731): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.837000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.937756 kernel: audit: type=1400 audit(1696279418.837:732): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.938021 kernel: audit: type=1400 audit(1696279418.837:733): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.837000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.837000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.955691 env[1383]: time="2023-10-02T20:43:38.955657847Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:cilium-krz5t,Uid:32b78d5d-1b1b-466f-87a9-3fe093940a84,Namespace:kube-system,Attempt:0,} returns sandbox id \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\"" Oct 2 20:43:38.958746 env[1383]: time="2023-10-02T20:43:38.958718064Z" level=info msg="CreateContainer within sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:43:38.970784 kernel: audit: type=1400 audit(1696279418.837:734): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.970881 kernel: audit: type=1400 audit(1696279418.837:735): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.837000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:39.001737 kernel: audit: type=1400 audit(1696279418.842:736): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit: BPF prog-id=91 op=LOAD Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[2748]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=400014db38 a2=10 a3=0 items=0 ppid=2737 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:38.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346263626561363732666665623361353462356362666332616561 Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[2748]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400014d5a0 a2=3c a3=0 items=0 ppid=2737 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:38.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346263626561363732666665623361353462356362666332616561 Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.842000 audit: BPF prog-id=92 op=LOAD Oct 2 20:43:38.842000 audit[2748]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014d8e0 a2=78 a3=0 items=0 ppid=2737 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:38.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346263626561363732666665623361353462356362666332616561 Oct 2 20:43:38.857000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.857000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.857000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.857000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.857000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.857000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.857000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.857000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.857000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.857000 audit: BPF prog-id=93 op=LOAD Oct 2 20:43:38.857000 audit[2748]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400014d670 a2=78 a3=0 items=0 ppid=2737 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:38.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346263626561363732666665623361353462356362666332616561 Oct 2 20:43:38.872000 audit: BPF prog-id=93 op=UNLOAD Oct 2 20:43:38.872000 audit: BPF prog-id=92 op=UNLOAD Oct 2 20:43:38.872000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.872000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.872000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.872000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.872000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.872000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.872000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.872000 audit[2748]: AVC avc: denied { perfmon } for pid=2748 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.872000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.872000 audit[2748]: AVC avc: denied { bpf } for pid=2748 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:43:38.872000 audit: BPF prog-id=94 op=LOAD Oct 2 20:43:38.872000 audit[2748]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400014db40 a2=78 a3=0 items=0 ppid=2737 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:43:38.872000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346263626561363732666665623361353462356362666332616561 Oct 2 20:43:39.043636 env[1383]: time="2023-10-02T20:43:39.043522468Z" level=info msg="CreateContainer within sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\"" Oct 2 20:43:39.045972 env[1383]: time="2023-10-02T20:43:39.045939952Z" level=info msg="StartContainer for \"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\"" Oct 2 20:43:39.064059 systemd[1]: Started cri-containerd-544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0.scope. Oct 2 20:43:39.075309 systemd[1]: cri-containerd-544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0.scope: Deactivated successfully. 
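The PROCTITLE fields in the audit records above are hex-encoded command lines, with argv entries separated by NUL bytes and truncated by auditd. The following is an editorial sketch (not part of the log), assuming only Go's standard library, that decodes the value recorded for pid 2748:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE value copied verbatim from the audit record above
	// (auditd truncates the field, so the trailing path is cut short).
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436346263626561363732666665623361353462356362666332616561"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// argv entries are NUL-separated; join them with spaces for display.
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
	// Prints (truncated by auditd):
	// runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/d64bcbea672ffeb3a54b5cbfc2aea
}
```

The decoded command line shows runc being invoked by containerd's runc v2 shim for the d64bcbea… sandbox whose creation is logged just above.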
Oct 2 20:43:39.076204 kubelet[1940]: E1002 20:43:39.076139 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:39.123687 kubelet[1940]: I1002 20:43:39.123454 1940 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=490d0e0a-a4b0-47d7-a30f-c82e91917098 path="/var/lib/kubelet/pods/490d0e0a-a4b0-47d7-a30f-c82e91917098/volumes" Oct 2 20:43:39.170705 env[1383]: time="2023-10-02T20:43:39.170650986Z" level=info msg="shim disconnected" id=544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0 Oct 2 20:43:39.170705 env[1383]: time="2023-10-02T20:43:39.170703947Z" level=warning msg="cleaning up after shim disconnected" id=544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0 namespace=k8s.io Oct 2 20:43:39.170865 env[1383]: time="2023-10-02T20:43:39.170712587Z" level=info msg="cleaning up dead shim" Oct 2 20:43:39.181809 env[1383]: time="2023-10-02T20:43:39.181756148Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:43:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2797 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:43:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:43:39.182068 env[1383]: time="2023-10-02T20:43:39.182011553Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 20:43:39.184075 env[1383]: time="2023-10-02T20:43:39.184039270Z" level=error msg="Failed to pipe stderr of container \"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\"" error="reading from a closed fifo" Oct 2 20:43:39.184226 env[1383]: time="2023-10-02T20:43:39.184193072Z" level=error msg="Failed to pipe stdout of container \"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\"" error="reading from a closed fifo" Oct 2 20:43:39.198825 env[1383]: time="2023-10-02T20:43:39.198783218Z" level=error msg="StartContainer for \"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:43:39.199149 kubelet[1940]: E1002 20:43:39.199115 1940 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0" Oct 2 20:43:39.199435 kubelet[1940]: E1002 20:43:39.199232 1940 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:43:39.199435 kubelet[1940]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:43:39.199435 kubelet[1940]: rm /hostbin/cilium-mount Oct 2 20:43:39.199435 kubelet[1940]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ftsr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-krz5t_kube-system(32b78d5d-1b1b-466f-87a9-3fe093940a84): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:43:39.199650 kubelet[1940]: E1002 20:43:39.199284 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-krz5t" podUID=32b78d5d-1b1b-466f-87a9-3fe093940a84 Oct 2 20:43:39.434932 env[1383]: time="2023-10-02T20:43:39.434897563Z" level=info msg="CreateContainer within sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:43:39.464299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2230457370.mount: Deactivated successfully. Oct 2 20:43:39.486546 env[1383]: time="2023-10-02T20:43:39.486503344Z" level=info msg="CreateContainer within sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\"" Oct 2 20:43:39.486969 env[1383]: time="2023-10-02T20:43:39.486944912Z" level=info msg="StartContainer for \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\"" Oct 2 20:43:39.504732 systemd[1]: Started cri-containerd-47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b.scope. Oct 2 20:43:39.518443 systemd[1]: cri-containerd-47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b.scope: Deactivated successfully. 
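The init-container spec that kubelet dumps in the preceding error is a single-line Go struct literal and is hard to read. Below is an editorial sketch, not part of the log, that restates the same spec using k8s.io/api/core/v1 types; every field value is copied from the dump above, while the package layout and the JSON print-out are illustrative only.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// mount-cgroup init container of pod cilium-krz5t, as dumped by kubelet above.
	mountCgroup := corev1.Container{
		Name:  "mount-cgroup",
		Image: "quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b",
		Command: []string{"sh", "-ec",
			`cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount`},
		Env: []corev1.EnvVar{
			{Name: "CGROUP_ROOT", Value: "/run/cilium/cgroupv2"},
			{Name: "BIN_PATH", Value: "/opt/cni/bin"},
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "hostproc", MountPath: "/hostproc"},
			{Name: "cni-path", MountPath: "/hostbin"},
			{Name: "kube-api-access-ftsr8", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
		},
		ImagePullPolicy:        corev1.PullIfNotPresent,
		TerminationMessagePath: "/dev/termination-log",
		SecurityContext: &corev1.SecurityContext{
			Capabilities: &corev1.Capabilities{
				Add:  []corev1.Capability{"SYS_ADMIN", "SYS_CHROOT", "SYS_PTRACE"},
				Drop: []corev1.Capability{"ALL"},
			},
			// SELinux options copied from the dumped spec.
			SELinuxOptions: &corev1.SELinuxOptions{Type: "spc_t", Level: "s0"},
		},
	}

	out, _ := json.MarshalIndent(mountCgroup, "", "  ")
	fmt.Println(string(out))
}
```

The StartContainer failures that follow in the log (Attempt:0 through Attempt:3) are all retries of this same init container inside the d64bcbea… sandbox.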
Oct 2 20:43:39.548314 env[1383]: time="2023-10-02T20:43:39.548259990Z" level=info msg="shim disconnected" id=47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b Oct 2 20:43:39.548314 env[1383]: time="2023-10-02T20:43:39.548312231Z" level=warning msg="cleaning up after shim disconnected" id=47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b namespace=k8s.io Oct 2 20:43:39.548314 env[1383]: time="2023-10-02T20:43:39.548321751Z" level=info msg="cleaning up dead shim" Oct 2 20:43:39.559498 env[1383]: time="2023-10-02T20:43:39.559452434Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:43:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2835 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:43:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:43:39.559753 env[1383]: time="2023-10-02T20:43:39.559691918Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 20:43:39.560069 env[1383]: time="2023-10-02T20:43:39.560033085Z" level=error msg="Failed to pipe stdout of container \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\"" error="reading from a closed fifo" Oct 2 20:43:39.560361 env[1383]: time="2023-10-02T20:43:39.560153847Z" level=error msg="Failed to pipe stderr of container \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\"" error="reading from a closed fifo" Oct 2 20:43:39.566411 env[1383]: time="2023-10-02T20:43:39.566367800Z" level=error msg="StartContainer for \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:43:39.567009 kubelet[1940]: E1002 20:43:39.566666 1940 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b" Oct 2 20:43:39.567009 kubelet[1940]: E1002 20:43:39.566767 1940 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:43:39.567009 kubelet[1940]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:43:39.567009 kubelet[1940]: rm /hostbin/cilium-mount Oct 2 20:43:39.567176 kubelet[1940]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ftsr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-krz5t_kube-system(32b78d5d-1b1b-466f-87a9-3fe093940a84): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:43:39.567230 kubelet[1940]: E1002 20:43:39.566802 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-krz5t" podUID=32b78d5d-1b1b-466f-87a9-3fe093940a84 Oct 2 20:43:40.076992 kubelet[1940]: E1002 20:43:40.076949 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:40.196308 kubelet[1940]: W1002 20:43:40.196271 1940 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod490d0e0a_a4b0_47d7_a30f_c82e91917098.slice/cri-containerd-1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82.scope WatchSource:0}: container "1edb28b2e82f80fb77519693a1ee0730a1f302a6423478c72cdf01b172ea7a82" in namespace "k8s.io": not found Oct 2 20:43:40.435567 kubelet[1940]: I1002 20:43:40.435542 1940 scope.go:115] "RemoveContainer" containerID="544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0" Oct 2 20:43:40.435970 kubelet[1940]: I1002 20:43:40.435924 1940 scope.go:115] "RemoveContainer" containerID="544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0" Oct 2 20:43:40.437371 env[1383]: time="2023-10-02T20:43:40.437338113Z" level=info msg="RemoveContainer for \"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\"" Oct 2 20:43:40.438756 env[1383]: time="2023-10-02T20:43:40.437338193Z" level=info msg="RemoveContainer for \"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\"" Oct 2 20:43:40.438950 env[1383]: time="2023-10-02T20:43:40.438918382Z" level=error msg="RemoveContainer for 
\"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\" failed" error="failed to set removing state for container \"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\": container is already in removing state" Oct 2 20:43:40.439223 kubelet[1940]: E1002 20:43:40.439206 1940 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\": container is already in removing state" containerID="544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0" Oct 2 20:43:40.439284 kubelet[1940]: I1002 20:43:40.439239 1940 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0} err="rpc error: code = Unknown desc = failed to set removing state for container \"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\": container is already in removing state" Oct 2 20:43:40.458831 env[1383]: time="2023-10-02T20:43:40.458797016Z" level=info msg="RemoveContainer for \"544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0\" returns successfully" Oct 2 20:43:40.459323 kubelet[1940]: E1002 20:43:40.459284 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-krz5t_kube-system(32b78d5d-1b1b-466f-87a9-3fe093940a84)\"" pod="kube-system/cilium-krz5t" podUID=32b78d5d-1b1b-466f-87a9-3fe093940a84 Oct 2 20:43:41.077178 kubelet[1940]: E1002 20:43:41.077144 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:41.439013 kubelet[1940]: E1002 20:43:41.438697 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-krz5t_kube-system(32b78d5d-1b1b-466f-87a9-3fe093940a84)\"" pod="kube-system/cilium-krz5t" podUID=32b78d5d-1b1b-466f-87a9-3fe093940a84 Oct 2 20:43:42.078290 kubelet[1940]: E1002 20:43:42.078258 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:43.037231 kubelet[1940]: E1002 20:43:43.037208 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:43:43.079480 kubelet[1940]: E1002 20:43:43.079462 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:43.304611 kubelet[1940]: W1002 20:43:43.304510 1940 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b78d5d_1b1b_466f_87a9_3fe093940a84.slice/cri-containerd-544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0.scope WatchSource:0}: container "544e718737a0af76d88b36615ddc571e671be15de4b1bc962776e896934f6cd0" in namespace "k8s.io": not found Oct 2 20:43:44.080043 kubelet[1940]: E1002 20:43:44.080010 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:45.080601 kubelet[1940]: E1002 20:43:45.080570 1940 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:46.081330 kubelet[1940]: E1002 20:43:46.081294 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:46.410238 kubelet[1940]: W1002 20:43:46.410210 1940 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b78d5d_1b1b_466f_87a9_3fe093940a84.slice/cri-containerd-47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b.scope WatchSource:0}: task 47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b not found: not found Oct 2 20:43:47.081736 kubelet[1940]: E1002 20:43:47.081699 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:48.038822 kubelet[1940]: E1002 20:43:48.038784 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:43:48.082019 kubelet[1940]: E1002 20:43:48.081979 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:49.082809 kubelet[1940]: E1002 20:43:49.082777 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:50.083434 kubelet[1940]: E1002 20:43:50.083403 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:51.084354 kubelet[1940]: E1002 20:43:51.084324 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:52.085088 kubelet[1940]: E1002 20:43:52.085049 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:52.099763 env[1383]: time="2023-10-02T20:43:52.099719470Z" level=info msg="CreateContainer within sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:43:52.124069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount872603560.mount: Deactivated successfully. Oct 2 20:43:52.128300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3877726350.mount: Deactivated successfully. Oct 2 20:43:52.142743 env[1383]: time="2023-10-02T20:43:52.142708805Z" level=info msg="CreateContainer within sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f\"" Oct 2 20:43:52.143200 env[1383]: time="2023-10-02T20:43:52.143177771Z" level=info msg="StartContainer for \"559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f\"" Oct 2 20:43:52.163426 systemd[1]: Started cri-containerd-559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f.scope. Oct 2 20:43:52.177417 systemd[1]: cri-containerd-559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f.scope: Deactivated successfully. 
Oct 2 20:43:52.208584 env[1383]: time="2023-10-02T20:43:52.208531926Z" level=info msg="shim disconnected" id=559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f Oct 2 20:43:52.208833 env[1383]: time="2023-10-02T20:43:52.208805489Z" level=warning msg="cleaning up after shim disconnected" id=559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f namespace=k8s.io Oct 2 20:43:52.208924 env[1383]: time="2023-10-02T20:43:52.208909611Z" level=info msg="cleaning up dead shim" Oct 2 20:43:52.220094 env[1383]: time="2023-10-02T20:43:52.220060240Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:43:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2876 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:43:52Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:43:52.220420 env[1383]: time="2023-10-02T20:43:52.220377244Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 20:43:52.220833 env[1383]: time="2023-10-02T20:43:52.220626807Z" level=error msg="Failed to pipe stdout of container \"559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f\"" error="reading from a closed fifo" Oct 2 20:43:52.220953 env[1383]: time="2023-10-02T20:43:52.220630168Z" level=error msg="Failed to pipe stderr of container \"559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f\"" error="reading from a closed fifo" Oct 2 20:43:52.225486 env[1383]: time="2023-10-02T20:43:52.225449432Z" level=error msg="StartContainer for \"559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:43:52.225712 kubelet[1940]: E1002 20:43:52.225690 1940 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f" Oct 2 20:43:52.225796 kubelet[1940]: E1002 20:43:52.225782 1940 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:43:52.225796 kubelet[1940]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:43:52.225796 kubelet[1940]: rm /hostbin/cilium-mount Oct 2 20:43:52.225796 kubelet[1940]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ftsr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-krz5t_kube-system(32b78d5d-1b1b-466f-87a9-3fe093940a84): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:43:52.225913 kubelet[1940]: E1002 20:43:52.225816 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-krz5t" podUID=32b78d5d-1b1b-466f-87a9-3fe093940a84 Oct 2 20:43:52.456247 kubelet[1940]: I1002 20:43:52.456223 1940 scope.go:115] "RemoveContainer" containerID="47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b" Oct 2 20:43:52.456558 kubelet[1940]: I1002 20:43:52.456540 1940 scope.go:115] "RemoveContainer" containerID="47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b" Oct 2 20:43:52.458012 env[1383]: time="2023-10-02T20:43:52.457966623Z" level=info msg="RemoveContainer for \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\"" Oct 2 20:43:52.458367 env[1383]: time="2023-10-02T20:43:52.458346588Z" level=info msg="RemoveContainer for \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\"" Oct 2 20:43:52.458552 env[1383]: time="2023-10-02T20:43:52.458523190Z" level=error msg="RemoveContainer for \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\" failed" error="failed to set removing state for container \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\": container is already in removing state" Oct 2 20:43:52.458738 kubelet[1940]: E1002 20:43:52.458718 1940 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\": container is already in removing state" 
containerID="47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b" Oct 2 20:43:52.458800 kubelet[1940]: I1002 20:43:52.458752 1940 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b} err="rpc error: code = Unknown desc = failed to set removing state for container \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\": container is already in removing state" Oct 2 20:43:52.469262 env[1383]: time="2023-10-02T20:43:52.469226173Z" level=info msg="RemoveContainer for \"47606f346a963d03394c0c7e075f4872b38103a5fa629708658e26e4c5e64a4b\" returns successfully" Oct 2 20:43:52.469613 kubelet[1940]: E1002 20:43:52.469593 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-krz5t_kube-system(32b78d5d-1b1b-466f-87a9-3fe093940a84)\"" pod="kube-system/cilium-krz5t" podUID=32b78d5d-1b1b-466f-87a9-3fe093940a84 Oct 2 20:43:52.888297 kubelet[1940]: E1002 20:43:52.888273 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:52.903762 env[1383]: time="2023-10-02T20:43:52.903732067Z" level=info msg="StopPodSandbox for \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\"" Oct 2 20:43:52.903976 env[1383]: time="2023-10-02T20:43:52.903936069Z" level=info msg="TearDown network for sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" successfully" Oct 2 20:43:52.904073 env[1383]: time="2023-10-02T20:43:52.904055591Z" level=info msg="StopPodSandbox for \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" returns successfully" Oct 2 20:43:52.904514 env[1383]: time="2023-10-02T20:43:52.904487557Z" level=info msg="RemovePodSandbox for \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\"" Oct 2 20:43:52.904564 env[1383]: time="2023-10-02T20:43:52.904519117Z" level=info msg="Forcibly stopping sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\"" Oct 2 20:43:52.904614 env[1383]: time="2023-10-02T20:43:52.904593158Z" level=info msg="TearDown network for sandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" successfully" Oct 2 20:43:52.914628 env[1383]: time="2023-10-02T20:43:52.914583532Z" level=info msg="RemovePodSandbox \"2fd5cae653e9d8e06356339ad51b4f2be5493e5903813b18364ec87c1eb276be\" returns successfully" Oct 2 20:43:52.915052 env[1383]: time="2023-10-02T20:43:52.915030058Z" level=info msg="StopPodSandbox for \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\"" Oct 2 20:43:52.915229 env[1383]: time="2023-10-02T20:43:52.915193620Z" level=info msg="TearDown network for sandbox \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\" successfully" Oct 2 20:43:52.915301 env[1383]: time="2023-10-02T20:43:52.915285141Z" level=info msg="StopPodSandbox for \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\" returns successfully" Oct 2 20:43:52.915644 env[1383]: time="2023-10-02T20:43:52.915613906Z" level=info msg="RemovePodSandbox for \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\"" Oct 2 20:43:52.915715 env[1383]: time="2023-10-02T20:43:52.915643266Z" level=info msg="Forcibly stopping sandbox \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\"" Oct 2 20:43:52.915743 env[1383]: 
time="2023-10-02T20:43:52.915719227Z" level=info msg="TearDown network for sandbox \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\" successfully" Oct 2 20:43:52.927621 env[1383]: time="2023-10-02T20:43:52.927578986Z" level=info msg="RemovePodSandbox \"7548a6d6c92ee14457bfa5500f5bc2ee37f08c85791d717f75dea9ad6d2d7bdb\" returns successfully" Oct 2 20:43:53.039158 kubelet[1940]: E1002 20:43:53.039126 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:43:53.085411 kubelet[1940]: E1002 20:43:53.085385 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:53.122401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f-rootfs.mount: Deactivated successfully. Oct 2 20:43:54.085656 kubelet[1940]: E1002 20:43:54.085628 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:55.086274 kubelet[1940]: E1002 20:43:55.086238 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:55.313006 kubelet[1940]: W1002 20:43:55.312960 1940 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b78d5d_1b1b_466f_87a9_3fe093940a84.slice/cri-containerd-559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f.scope WatchSource:0}: task 559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f not found: not found Oct 2 20:43:56.087085 kubelet[1940]: E1002 20:43:56.087054 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:57.088061 kubelet[1940]: E1002 20:43:57.088032 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:58.040560 kubelet[1940]: E1002 20:43:58.040528 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:43:58.089296 kubelet[1940]: E1002 20:43:58.089276 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:43:59.089419 kubelet[1940]: E1002 20:43:59.089372 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:00.090380 kubelet[1940]: E1002 20:44:00.090337 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:01.091199 kubelet[1940]: E1002 20:44:01.091147 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:02.092321 kubelet[1940]: E1002 20:44:02.092260 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:03.040828 kubelet[1940]: E1002 20:44:03.040798 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:03.093273 
kubelet[1940]: E1002 20:44:03.093239 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:04.093893 kubelet[1940]: E1002 20:44:04.093858 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:05.095360 kubelet[1940]: E1002 20:44:05.095327 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:06.095930 kubelet[1940]: E1002 20:44:06.095901 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:07.097213 kubelet[1940]: E1002 20:44:07.097169 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:07.097821 kubelet[1940]: E1002 20:44:07.097785 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-krz5t_kube-system(32b78d5d-1b1b-466f-87a9-3fe093940a84)\"" pod="kube-system/cilium-krz5t" podUID=32b78d5d-1b1b-466f-87a9-3fe093940a84 Oct 2 20:44:08.041749 kubelet[1940]: E1002 20:44:08.041698 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:08.098109 kubelet[1940]: E1002 20:44:08.098062 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:09.098764 kubelet[1940]: E1002 20:44:09.098721 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:10.099535 kubelet[1940]: E1002 20:44:10.099498 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:11.100528 kubelet[1940]: E1002 20:44:11.100493 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:12.100918 kubelet[1940]: E1002 20:44:12.100853 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:12.888685 kubelet[1940]: E1002 20:44:12.888651 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:13.042242 kubelet[1940]: E1002 20:44:13.042209 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:13.101274 kubelet[1940]: E1002 20:44:13.101253 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:14.102214 kubelet[1940]: E1002 20:44:14.102171 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:15.102689 kubelet[1940]: E1002 20:44:15.102661 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:16.102843 kubelet[1940]: E1002 20:44:16.102799 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:44:17.103650 kubelet[1940]: E1002 20:44:17.103625 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:18.042772 kubelet[1940]: E1002 20:44:18.042750 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:18.104498 kubelet[1940]: E1002 20:44:18.104479 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:19.099520 env[1383]: time="2023-10-02T20:44:19.099475430Z" level=info msg="CreateContainer within sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:44:19.105129 kubelet[1940]: E1002 20:44:19.105104 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:19.124848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3177562492.mount: Deactivated successfully. Oct 2 20:44:19.129963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545383791.mount: Deactivated successfully. Oct 2 20:44:19.144102 env[1383]: time="2023-10-02T20:44:19.144040816Z" level=info msg="CreateContainer within sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e\"" Oct 2 20:44:19.144870 env[1383]: time="2023-10-02T20:44:19.144843101Z" level=info msg="StartContainer for \"0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e\"" Oct 2 20:44:19.166621 systemd[1]: Started cri-containerd-0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e.scope. Oct 2 20:44:19.180446 systemd[1]: cri-containerd-0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e.scope: Deactivated successfully. 
Oct 2 20:44:19.219077 env[1383]: time="2023-10-02T20:44:19.219029345Z" level=info msg="shim disconnected" id=0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e Oct 2 20:44:19.219306 env[1383]: time="2023-10-02T20:44:19.219287387Z" level=warning msg="cleaning up after shim disconnected" id=0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e namespace=k8s.io Oct 2 20:44:19.219369 env[1383]: time="2023-10-02T20:44:19.219356387Z" level=info msg="cleaning up dead shim" Oct 2 20:44:19.230966 env[1383]: time="2023-10-02T20:44:19.230929176Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:44:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2924 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:44:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:44:19.231350 env[1383]: time="2023-10-02T20:44:19.231301458Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 20:44:19.232108 env[1383]: time="2023-10-02T20:44:19.232066703Z" level=error msg="Failed to pipe stdout of container \"0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e\"" error="reading from a closed fifo" Oct 2 20:44:19.232218 env[1383]: time="2023-10-02T20:44:19.232190144Z" level=error msg="Failed to pipe stderr of container \"0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e\"" error="reading from a closed fifo" Oct 2 20:44:19.236825 env[1383]: time="2023-10-02T20:44:19.236781651Z" level=error msg="StartContainer for \"0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:44:19.237527 kubelet[1940]: E1002 20:44:19.237071 1940 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e" Oct 2 20:44:19.237527 kubelet[1940]: E1002 20:44:19.237161 1940 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:44:19.237527 kubelet[1940]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:44:19.237527 kubelet[1940]: rm /hostbin/cilium-mount Oct 2 20:44:19.237745 kubelet[1940]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ftsr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-krz5t_kube-system(32b78d5d-1b1b-466f-87a9-3fe093940a84): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:44:19.237802 kubelet[1940]: E1002 20:44:19.237203 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-krz5t" podUID=32b78d5d-1b1b-466f-87a9-3fe093940a84 Oct 2 20:44:19.496473 kubelet[1940]: I1002 20:44:19.496445 1940 scope.go:115] "RemoveContainer" containerID="559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f" Oct 2 20:44:19.496761 kubelet[1940]: I1002 20:44:19.496737 1940 scope.go:115] "RemoveContainer" containerID="559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f" Oct 2 20:44:19.498225 env[1383]: time="2023-10-02T20:44:19.498195015Z" level=info msg="RemoveContainer for \"559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f\"" Oct 2 20:44:19.498596 env[1383]: time="2023-10-02T20:44:19.498280335Z" level=info msg="RemoveContainer for \"559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f\"" Oct 2 20:44:19.498782 env[1383]: time="2023-10-02T20:44:19.498744218Z" level=error msg="RemoveContainer for \"559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f\" failed" error="failed to set removing state for container \"559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f\": container is already in removing state" Oct 2 20:44:19.498991 kubelet[1940]: E1002 20:44:19.498963 1940 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f\": container is already in removing state" 
containerID="559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f" Oct 2 20:44:19.499063 kubelet[1940]: E1002 20:44:19.499016 1940 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f": container is already in removing state; Skipping pod "cilium-krz5t_kube-system(32b78d5d-1b1b-466f-87a9-3fe093940a84)" Oct 2 20:44:19.499292 kubelet[1940]: E1002 20:44:19.499273 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-krz5t_kube-system(32b78d5d-1b1b-466f-87a9-3fe093940a84)\"" pod="kube-system/cilium-krz5t" podUID=32b78d5d-1b1b-466f-87a9-3fe093940a84 Oct 2 20:44:19.509768 env[1383]: time="2023-10-02T20:44:19.509727924Z" level=info msg="RemoveContainer for \"559f259209100122fbbee0420059a1638b7ac0502886dce7731c0823e1e7cd5f\" returns successfully" Oct 2 20:44:20.106387 kubelet[1940]: E1002 20:44:20.106337 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:20.123181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e-rootfs.mount: Deactivated successfully. Oct 2 20:44:21.107146 kubelet[1940]: E1002 20:44:21.107114 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:22.108039 kubelet[1940]: E1002 20:44:22.108001 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:22.323504 kubelet[1940]: W1002 20:44:22.323476 1940 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b78d5d_1b1b_466f_87a9_3fe093940a84.slice/cri-containerd-0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e.scope WatchSource:0}: task 0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e not found: not found Oct 2 20:44:23.043704 kubelet[1940]: E1002 20:44:23.043667 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:23.108595 kubelet[1940]: E1002 20:44:23.108564 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:24.108681 kubelet[1940]: E1002 20:44:24.108644 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:25.109647 kubelet[1940]: E1002 20:44:25.109616 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:26.110147 kubelet[1940]: E1002 20:44:26.110112 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:27.110914 kubelet[1940]: E1002 20:44:27.110874 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:28.044568 kubelet[1940]: E1002 20:44:28.044532 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:28.111123 kubelet[1940]: E1002 20:44:28.111102 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:29.112126 kubelet[1940]: E1002 20:44:29.112103 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:30.112728 kubelet[1940]: E1002 20:44:30.112692 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:31.113510 kubelet[1940]: E1002 20:44:31.113486 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:32.097913 kubelet[1940]: E1002 20:44:32.097884 1940 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-krz5t_kube-system(32b78d5d-1b1b-466f-87a9-3fe093940a84)\"" pod="kube-system/cilium-krz5t" podUID=32b78d5d-1b1b-466f-87a9-3fe093940a84 Oct 2 20:44:32.114791 kubelet[1940]: E1002 20:44:32.114764 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:32.888598 kubelet[1940]: E1002 20:44:32.888573 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:33.045877 kubelet[1940]: E1002 20:44:33.045845 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:33.115447 kubelet[1940]: E1002 20:44:33.115417 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:33.797184 env[1383]: time="2023-10-02T20:44:33.797135305Z" level=info msg="StopPodSandbox for \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\"" Oct 2 20:44:33.797580 env[1383]: time="2023-10-02T20:44:33.797555746Z" level=info msg="Container to stop \"0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:44:33.799060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e-shm.mount: Deactivated successfully. Oct 2 20:44:33.807274 systemd[1]: cri-containerd-d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e.scope: Deactivated successfully. 
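The repeated `RunContainerError` above comes from runc failing to label the session keyring for the `mount-cgroup` init container: the container spec requests `SELinuxOptions{Type:spc_t}`, and the write to `/proc/self/attr/keycreate` is rejected with `EINVAL` on a node where SELinux cannot accept that label (typically because no SELinux policy is loaded). A minimal Go sketch of that failing write, purely illustrative and not part of the log:

```go
// Minimal sketch of the keyring-label write runc attempts before starting the
// container process; on a host without a loaded SELinux policy it typically
// fails with "invalid argument", matching the error logged above.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Label based on the container spec above (SELinuxOptions Type:spc_t).
	label := "system_u:system_r:spc_t:s0"

	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	if _, err := f.Write([]byte(label)); err != nil {
		fmt.Println("write:", err) // e.g. "invalid argument" when the label is rejected
		return
	}
	fmt.Println("keycreate label accepted")
}
```

The `container is already in removing state` failure that follows reflects the kubelet issuing the same removal twice in quick succession; the log shows the removal ultimately returning successfully before the pod goes into CrashLoopBackOff.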
Oct 2 20:44:33.813205 kernel: kauditd_printk_skb: 47 callbacks suppressed Oct 2 20:44:33.813309 kernel: audit: type=1334 audit(1696279473.806:745): prog-id=91 op=UNLOAD Oct 2 20:44:33.806000 audit: BPF prog-id=91 op=UNLOAD Oct 2 20:44:33.818262 env[1383]: time="2023-10-02T20:44:33.818223453Z" level=info msg="StopContainer for \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\" with timeout 30 (s)" Oct 2 20:44:33.818822 env[1383]: time="2023-10-02T20:44:33.818793615Z" level=info msg="Stop container \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\" with signal terminated" Oct 2 20:44:33.819000 audit: BPF prog-id=94 op=UNLOAD Oct 2 20:44:33.828102 kernel: audit: type=1334 audit(1696279473.819:746): prog-id=94 op=UNLOAD Oct 2 20:44:33.836000 audit: BPF prog-id=83 op=UNLOAD Oct 2 20:44:33.837216 systemd[1]: cri-containerd-ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2.scope: Deactivated successfully. Oct 2 20:44:33.844025 kernel: audit: type=1334 audit(1696279473.836:747): prog-id=83 op=UNLOAD Oct 2 20:44:33.846000 audit: BPF prog-id=86 op=UNLOAD Oct 2 20:44:33.847250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e-rootfs.mount: Deactivated successfully. Oct 2 20:44:33.855018 kernel: audit: type=1334 audit(1696279473.846:748): prog-id=86 op=UNLOAD Oct 2 20:44:33.868406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2-rootfs.mount: Deactivated successfully. Oct 2 20:44:33.899577 env[1383]: time="2023-10-02T20:44:33.899524155Z" level=info msg="shim disconnected" id=d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e Oct 2 20:44:33.899577 env[1383]: time="2023-10-02T20:44:33.899568435Z" level=warning msg="cleaning up after shim disconnected" id=d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e namespace=k8s.io Oct 2 20:44:33.899577 env[1383]: time="2023-10-02T20:44:33.899577675Z" level=info msg="cleaning up dead shim" Oct 2 20:44:33.899888 env[1383]: time="2023-10-02T20:44:33.899855276Z" level=info msg="shim disconnected" id=ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2 Oct 2 20:44:33.899933 env[1383]: time="2023-10-02T20:44:33.899890156Z" level=warning msg="cleaning up after shim disconnected" id=ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2 namespace=k8s.io Oct 2 20:44:33.899933 env[1383]: time="2023-10-02T20:44:33.899901476Z" level=info msg="cleaning up dead shim" Oct 2 20:44:33.912206 env[1383]: time="2023-10-02T20:44:33.912157955Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:44:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2975 runtime=io.containerd.runc.v2\n" Oct 2 20:44:33.912460 env[1383]: time="2023-10-02T20:44:33.912426036Z" level=info msg="TearDown network for sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" successfully" Oct 2 20:44:33.912460 env[1383]: time="2023-10-02T20:44:33.912453196Z" level=info msg="StopPodSandbox for \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" returns successfully" Oct 2 20:44:33.915307 env[1383]: time="2023-10-02T20:44:33.915282245Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:44:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2976 runtime=io.containerd.runc.v2\n" Oct 2 20:44:33.920599 env[1383]: time="2023-10-02T20:44:33.920569342Z" level=info msg="StopContainer for 
\"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\" returns successfully" Oct 2 20:44:33.921178 env[1383]: time="2023-10-02T20:44:33.921152944Z" level=info msg="StopPodSandbox for \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\"" Oct 2 20:44:33.921529 env[1383]: time="2023-10-02T20:44:33.921504265Z" level=info msg="Container to stop \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:44:33.922834 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64-shm.mount: Deactivated successfully. Oct 2 20:44:33.930000 audit: BPF prog-id=79 op=UNLOAD Oct 2 20:44:33.931668 systemd[1]: cri-containerd-8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64.scope: Deactivated successfully. Oct 2 20:44:33.938024 kernel: audit: type=1334 audit(1696279473.930:749): prog-id=79 op=UNLOAD Oct 2 20:44:33.942000 audit: BPF prog-id=82 op=UNLOAD Oct 2 20:44:33.949040 kernel: audit: type=1334 audit(1696279473.942:750): prog-id=82 op=UNLOAD Oct 2 20:44:33.966720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64-rootfs.mount: Deactivated successfully. Oct 2 20:44:33.988321 env[1383]: time="2023-10-02T20:44:33.988279160Z" level=info msg="shim disconnected" id=8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64 Oct 2 20:44:33.988539 env[1383]: time="2023-10-02T20:44:33.988521241Z" level=warning msg="cleaning up after shim disconnected" id=8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64 namespace=k8s.io Oct 2 20:44:33.988626 env[1383]: time="2023-10-02T20:44:33.988611601Z" level=info msg="cleaning up dead shim" Oct 2 20:44:34.000844 env[1383]: time="2023-10-02T20:44:34.000814641Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:44:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3021 runtime=io.containerd.runc.v2\n" Oct 2 20:44:34.001289 env[1383]: time="2023-10-02T20:44:34.001263482Z" level=info msg="TearDown network for sandbox \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\" successfully" Oct 2 20:44:34.001394 env[1383]: time="2023-10-02T20:44:34.001376482Z" level=info msg="StopPodSandbox for \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\" returns successfully" Oct 2 20:44:34.025019 kubelet[1940]: I1002 20:44:34.023536 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32b78d5d-1b1b-466f-87a9-3fe093940a84-hubble-tls\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025019 kubelet[1940]: I1002 20:44:34.023591 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-lib-modules\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025019 kubelet[1940]: I1002 20:44:34.023611 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-etc-cni-netd\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025019 kubelet[1940]: I1002 20:44:34.023633 1940 
reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-config-path\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025019 kubelet[1940]: I1002 20:44:34.023671 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-hostproc\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025019 kubelet[1940]: I1002 20:44:34.023688 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cni-path\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025248 kubelet[1940]: I1002 20:44:34.023709 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-ipsec-secrets\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025248 kubelet[1940]: I1002 20:44:34.023739 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32b78d5d-1b1b-466f-87a9-3fe093940a84-clustermesh-secrets\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025248 kubelet[1940]: I1002 20:44:34.023759 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-host-proc-sys-kernel\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025248 kubelet[1940]: I1002 20:44:34.023780 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-xtables-lock\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025248 kubelet[1940]: I1002 20:44:34.023798 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-run\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025248 kubelet[1940]: I1002 20:44:34.023825 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-bpf-maps\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025385 kubelet[1940]: I1002 20:44:34.023842 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-host-proc-sys-net\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025385 kubelet[1940]: I1002 20:44:34.023861 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-cgroup\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025385 kubelet[1940]: I1002 20:44:34.023889 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftsr8\" (UniqueName: \"kubernetes.io/projected/32b78d5d-1b1b-466f-87a9-3fe093940a84-kube-api-access-ftsr8\") pod \"32b78d5d-1b1b-466f-87a9-3fe093940a84\" (UID: \"32b78d5d-1b1b-466f-87a9-3fe093940a84\") " Oct 2 20:44:34.025385 kubelet[1940]: I1002 20:44:34.024317 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:44:34.025385 kubelet[1940]: I1002 20:44:34.024344 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:44:34.025385 kubelet[1940]: W1002 20:44:34.024440 1940 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/32b78d5d-1b1b-466f-87a9-3fe093940a84/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:44:34.026871 kubelet[1940]: I1002 20:44:34.026849 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:44:34.027031 kubelet[1940]: I1002 20:44:34.027016 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-hostproc" (OuterVolumeSpecName: "hostproc") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:44:34.027133 kubelet[1940]: I1002 20:44:34.027120 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cni-path" (OuterVolumeSpecName: "cni-path") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:44:34.027404 kubelet[1940]: I1002 20:44:34.027388 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:44:34.027529 kubelet[1940]: I1002 20:44:34.027514 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:44:34.027642 kubelet[1940]: I1002 20:44:34.027615 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:44:34.027748 kubelet[1940]: I1002 20:44:34.027734 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:44:34.027839 kubelet[1940]: I1002 20:44:34.027827 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:44:34.027950 kubelet[1940]: I1002 20:44:34.027937 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:44:34.028483 kubelet[1940]: I1002 20:44:34.028451 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b78d5d-1b1b-466f-87a9-3fe093940a84-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:44:34.030796 kubelet[1940]: I1002 20:44:34.030774 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b78d5d-1b1b-466f-87a9-3fe093940a84-kube-api-access-ftsr8" (OuterVolumeSpecName: "kube-api-access-ftsr8") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "kube-api-access-ftsr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:44:34.032751 kubelet[1940]: I1002 20:44:34.032730 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b78d5d-1b1b-466f-87a9-3fe093940a84-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:44:34.033914 kubelet[1940]: I1002 20:44:34.033883 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "32b78d5d-1b1b-466f-87a9-3fe093940a84" (UID: "32b78d5d-1b1b-466f-87a9-3fe093940a84"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:44:34.116325 kubelet[1940]: E1002 20:44:34.116235 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:34.124574 kubelet[1940]: I1002 20:44:34.124549 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnvjp\" (UniqueName: \"kubernetes.io/projected/814a94cd-f402-4a34-9fed-5a7e6df702f4-kube-api-access-mnvjp\") pod \"814a94cd-f402-4a34-9fed-5a7e6df702f4\" (UID: \"814a94cd-f402-4a34-9fed-5a7e6df702f4\") " Oct 2 20:44:34.124643 kubelet[1940]: I1002 20:44:34.124592 1940 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/814a94cd-f402-4a34-9fed-5a7e6df702f4-cilium-config-path\") pod \"814a94cd-f402-4a34-9fed-5a7e6df702f4\" (UID: \"814a94cd-f402-4a34-9fed-5a7e6df702f4\") " Oct 2 20:44:34.124643 kubelet[1940]: I1002 20:44:34.124616 1940 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32b78d5d-1b1b-466f-87a9-3fe093940a84-hubble-tls\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124643 kubelet[1940]: I1002 20:44:34.124627 1940 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-lib-modules\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124643 kubelet[1940]: I1002 20:44:34.124637 1940 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-etc-cni-netd\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124643 kubelet[1940]: I1002 20:44:34.124647 1940 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-config-path\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124779 kubelet[1940]: I1002 20:44:34.124656 1940 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-hostproc\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124779 kubelet[1940]: I1002 20:44:34.124666 1940 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cni-path\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124779 kubelet[1940]: I1002 20:44:34.124676 1940 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-ipsec-secrets\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124779 kubelet[1940]: I1002 20:44:34.124685 1940 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32b78d5d-1b1b-466f-87a9-3fe093940a84-clustermesh-secrets\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124779 kubelet[1940]: I1002 
20:44:34.124695 1940 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-host-proc-sys-kernel\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124779 kubelet[1940]: I1002 20:44:34.124704 1940 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-xtables-lock\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124779 kubelet[1940]: I1002 20:44:34.124713 1940 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-run\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124779 kubelet[1940]: I1002 20:44:34.124722 1940 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-bpf-maps\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124957 kubelet[1940]: I1002 20:44:34.124732 1940 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-host-proc-sys-net\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124957 kubelet[1940]: I1002 20:44:34.124741 1940 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32b78d5d-1b1b-466f-87a9-3fe093940a84-cilium-cgroup\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124957 kubelet[1940]: I1002 20:44:34.124750 1940 reconciler.go:399] "Volume detached for volume \"kube-api-access-ftsr8\" (UniqueName: \"kubernetes.io/projected/32b78d5d-1b1b-466f-87a9-3fe093940a84-kube-api-access-ftsr8\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.124957 kubelet[1940]: W1002 20:44:34.124902 1940 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/814a94cd-f402-4a34-9fed-5a7e6df702f4/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:44:34.126584 kubelet[1940]: I1002 20:44:34.126539 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/814a94cd-f402-4a34-9fed-5a7e6df702f4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "814a94cd-f402-4a34-9fed-5a7e6df702f4" (UID: "814a94cd-f402-4a34-9fed-5a7e6df702f4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:44:34.130141 kubelet[1940]: I1002 20:44:34.130115 1940 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814a94cd-f402-4a34-9fed-5a7e6df702f4-kube-api-access-mnvjp" (OuterVolumeSpecName: "kube-api-access-mnvjp") pod "814a94cd-f402-4a34-9fed-5a7e6df702f4" (UID: "814a94cd-f402-4a34-9fed-5a7e6df702f4"). InnerVolumeSpecName "kube-api-access-mnvjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:44:34.225362 kubelet[1940]: I1002 20:44:34.225340 1940 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/814a94cd-f402-4a34-9fed-5a7e6df702f4-cilium-config-path\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.225491 kubelet[1940]: I1002 20:44:34.225479 1940 reconciler.go:399] "Volume detached for volume \"kube-api-access-mnvjp\" (UniqueName: \"kubernetes.io/projected/814a94cd-f402-4a34-9fed-5a7e6df702f4-kube-api-access-mnvjp\") on node \"10.200.20.44\" DevicePath \"\"" Oct 2 20:44:34.518496 kubelet[1940]: I1002 20:44:34.518474 1940 scope.go:115] "RemoveContainer" containerID="ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2" Oct 2 20:44:34.519498 env[1383]: time="2023-10-02T20:44:34.519450659Z" level=info msg="RemoveContainer for \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\"" Oct 2 20:44:34.524967 systemd[1]: Removed slice kubepods-besteffort-pod814a94cd_f402_4a34_9fed_5a7e6df702f4.slice. Oct 2 20:44:34.529203 systemd[1]: Removed slice kubepods-burstable-pod32b78d5d_1b1b_466f_87a9_3fe093940a84.slice. Oct 2 20:44:34.533899 env[1383]: time="2023-10-02T20:44:34.533857943Z" level=info msg="RemoveContainer for \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\" returns successfully" Oct 2 20:44:34.534785 kubelet[1940]: I1002 20:44:34.534759 1940 scope.go:115] "RemoveContainer" containerID="ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2" Oct 2 20:44:34.535056 env[1383]: time="2023-10-02T20:44:34.534954866Z" level=error msg="ContainerStatus for \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\": not found" Oct 2 20:44:34.535186 kubelet[1940]: E1002 20:44:34.535166 1940 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\": not found" containerID="ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2" Oct 2 20:44:34.535228 kubelet[1940]: I1002 20:44:34.535204 1940 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2} err="failed to get container status \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\": not found" Oct 2 20:44:34.535228 kubelet[1940]: I1002 20:44:34.535216 1940 scope.go:115] "RemoveContainer" containerID="0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e" Oct 2 20:44:34.538336 env[1383]: time="2023-10-02T20:44:34.538103756Z" level=info msg="RemoveContainer for \"0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e\"" Oct 2 20:44:34.546577 env[1383]: time="2023-10-02T20:44:34.546499941Z" level=info msg="RemoveContainer for \"0c5e2ffc539e279d9f8c84c145d348991d5d2b567cff19f21cc855826cb7617e\" returns successfully" Oct 2 20:44:34.799059 systemd[1]: var-lib-kubelet-pods-32b78d5d\x2d1b1b\x2d466f\x2d87a9\x2d3fe093940a84-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dftsr8.mount: Deactivated successfully. 
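The systemd mount units deactivated above are named by escaping the kubelet volume paths: `/` becomes `-` and other special characters become `\xNN`, which is why `kube-api-access-ftsr8` appears as `kube\x2dapi\x2daccess\x2dftsr8` and `kubernetes.io~projected` as `kubernetes.io\x7eprojected`. A small Go sketch of that escaping, assumed to mirror `systemd-escape --path` and shown only to make the unit names readable:

```go
// Illustrative path-escaping sketch, assumed to match how the .mount unit
// names above are derived; not taken from systemd source.
package main

import (
	"fmt"
	"strings"
)

// escapePath keeps alphanumerics, '_' and non-leading '.', turns '/' into '-',
// and hex-escapes everything else as \xNN.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i, r := range p {
		switch {
		case r == '/':
			b.WriteByte('-')
		case r >= 'a' && r <= 'z', r >= 'A' && r <= 'Z', r >= '0' && r <= '9',
			r == '_', r == '.' && i != 0:
			b.WriteRune(r)
		default:
			b.WriteString(fmt.Sprintf(`\x%02x`, r))
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/32b78d5d-1b1b-466f-87a9-3fe093940a84/volumes/kubernetes.io~projected/kube-api-access-ftsr8"
	// Prints the same unit name that systemd reports as deactivated above.
	fmt.Println(escapePath(p) + ".mount")
}
```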
Oct 2 20:44:34.799151 systemd[1]: var-lib-kubelet-pods-32b78d5d\x2d1b1b\x2d466f\x2d87a9\x2d3fe093940a84-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:44:34.799208 systemd[1]: var-lib-kubelet-pods-32b78d5d\x2d1b1b\x2d466f\x2d87a9\x2d3fe093940a84-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:44:34.799258 systemd[1]: var-lib-kubelet-pods-32b78d5d\x2d1b1b\x2d466f\x2d87a9\x2d3fe093940a84-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 20:44:34.799303 systemd[1]: var-lib-kubelet-pods-814a94cd\x2df402\x2d4a34\x2d9fed\x2d5a7e6df702f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmnvjp.mount: Deactivated successfully. Oct 2 20:44:35.099057 env[1383]: time="2023-10-02T20:44:35.098918086Z" level=info msg="StopPodSandbox for \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\"" Oct 2 20:44:35.099324 env[1383]: time="2023-10-02T20:44:35.099115486Z" level=info msg="TearDown network for sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" successfully" Oct 2 20:44:35.099324 env[1383]: time="2023-10-02T20:44:35.099154886Z" level=info msg="StopPodSandbox for \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" returns successfully" Oct 2 20:44:35.099324 env[1383]: time="2023-10-02T20:44:35.099309167Z" level=info msg="StopContainer for \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\" with timeout 1 (s)" Oct 2 20:44:35.099410 env[1383]: time="2023-10-02T20:44:35.099334207Z" level=error msg="StopContainer for \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\": not found" Oct 2 20:44:35.099770 kubelet[1940]: E1002 20:44:35.099537 1940 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2\": not found" containerID="ce75de49233fbbdac0639435d8e949c78ea32d8679ef0aa4c23873f3813469c2" Oct 2 20:44:35.099896 env[1383]: time="2023-10-02T20:44:35.099805648Z" level=info msg="StopPodSandbox for \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\"" Oct 2 20:44:35.099896 env[1383]: time="2023-10-02T20:44:35.099866288Z" level=info msg="TearDown network for sandbox \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\" successfully" Oct 2 20:44:35.099896 env[1383]: time="2023-10-02T20:44:35.099891368Z" level=info msg="StopPodSandbox for \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\" returns successfully" Oct 2 20:44:35.100727 kubelet[1940]: I1002 20:44:35.100623 1940 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=32b78d5d-1b1b-466f-87a9-3fe093940a84 path="/var/lib/kubelet/pods/32b78d5d-1b1b-466f-87a9-3fe093940a84/volumes" Oct 2 20:44:35.101418 kubelet[1940]: I1002 20:44:35.101403 1940 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=814a94cd-f402-4a34-9fed-5a7e6df702f4 path="/var/lib/kubelet/pods/814a94cd-f402-4a34-9fed-5a7e6df702f4/volumes" Oct 2 20:44:35.117494 kubelet[1940]: E1002 20:44:35.117479 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:36.117965 kubelet[1940]: E1002 
20:44:36.117926 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:37.118916 kubelet[1940]: E1002 20:44:37.118886 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:38.047161 kubelet[1940]: E1002 20:44:38.047138 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:38.119457 kubelet[1940]: E1002 20:44:38.119442 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:39.120621 kubelet[1940]: E1002 20:44:39.120590 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:40.121282 kubelet[1940]: E1002 20:44:40.121248 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:41.122082 kubelet[1940]: E1002 20:44:41.122057 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:42.122878 kubelet[1940]: E1002 20:44:42.122833 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:43.048304 kubelet[1940]: E1002 20:44:43.048218 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:43.123312 kubelet[1940]: E1002 20:44:43.123287 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:44.123824 kubelet[1940]: E1002 20:44:44.123790 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:45.124394 kubelet[1940]: E1002 20:44:45.124369 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:46.125705 kubelet[1940]: E1002 20:44:46.125675 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:47.127123 kubelet[1940]: E1002 20:44:47.127097 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:48.049077 kubelet[1940]: E1002 20:44:48.049046 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:48.128175 kubelet[1940]: E1002 20:44:48.128145 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:49.129082 kubelet[1940]: E1002 20:44:49.129050 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:50.130082 kubelet[1940]: E1002 20:44:50.130044 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:51.131448 kubelet[1940]: E1002 20:44:51.131417 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:44:51.443063 kubelet[1940]: E1002 20:44:51.443030 1940 controller.go:187] failed to update lease, error: Put "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.44?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Oct 2 20:44:51.619747 kubelet[1940]: E1002 20:44:51.619712 1940 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"10.200.20.44\": Get \"https://10.200.20.40:6443/api/v1/nodes/10.200.20.44?resourceVersion=0&timeout=10s\": context deadline exceeded" Oct 2 20:44:52.132097 kubelet[1940]: E1002 20:44:52.132066 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:52.888696 kubelet[1940]: E1002 20:44:52.888669 1940 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:52.923576 kubelet[1940]: W1002 20:44:52.923559 1940 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 20:44:52.930161 env[1383]: time="2023-10-02T20:44:52.929960763Z" level=info msg="StopPodSandbox for \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\"" Oct 2 20:44:52.930161 env[1383]: time="2023-10-02T20:44:52.930084243Z" level=info msg="TearDown network for sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" successfully" Oct 2 20:44:52.930161 env[1383]: time="2023-10-02T20:44:52.930115923Z" level=info msg="StopPodSandbox for \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" returns successfully" Oct 2 20:44:52.930709 env[1383]: time="2023-10-02T20:44:52.930681484Z" level=info msg="RemovePodSandbox for \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\"" Oct 2 20:44:52.930806 env[1383]: time="2023-10-02T20:44:52.930713164Z" level=info msg="Forcibly stopping sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\"" Oct 2 20:44:52.930806 env[1383]: time="2023-10-02T20:44:52.930787004Z" level=info msg="TearDown network for sandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" successfully" Oct 2 20:44:52.937878 env[1383]: time="2023-10-02T20:44:52.937840206Z" level=info msg="RemovePodSandbox \"d64bcbea672ffeb3a54b5cbfc2aeaa780fd3a2a65c17548c1d5b7c77d1a7555e\" returns successfully" Oct 2 20:44:52.938318 env[1383]: time="2023-10-02T20:44:52.938179686Z" level=info msg="StopPodSandbox for \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\"" Oct 2 20:44:52.938318 env[1383]: time="2023-10-02T20:44:52.938241846Z" level=info msg="TearDown network for sandbox \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\" successfully" Oct 2 20:44:52.938318 env[1383]: time="2023-10-02T20:44:52.938267326Z" level=info msg="StopPodSandbox for \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\" returns successfully" Oct 2 20:44:52.939233 env[1383]: time="2023-10-02T20:44:52.938596686Z" level=info msg="RemovePodSandbox for \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\"" Oct 2 20:44:52.939233 env[1383]: time="2023-10-02T20:44:52.938619246Z" level=info msg="Forcibly stopping sandbox \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\"" Oct 2 20:44:52.939233 env[1383]: time="2023-10-02T20:44:52.938668686Z" level=info msg="TearDown network for sandbox 
\"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\" successfully" Oct 2 20:44:52.948032 env[1383]: time="2023-10-02T20:44:52.947976329Z" level=info msg="RemovePodSandbox \"8d9cbc21177b93db87072dceedb39fbd68f740e21de060e1a221d4f94681cb64\" returns successfully" Oct 2 20:44:53.049619 kubelet[1940]: E1002 20:44:53.049566 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:53.133399 kubelet[1940]: E1002 20:44:53.133372 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:54.134371 kubelet[1940]: E1002 20:44:54.134335 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:55.134976 kubelet[1940]: E1002 20:44:55.134951 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:56.135719 kubelet[1940]: E1002 20:44:56.135669 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:57.136714 kubelet[1940]: E1002 20:44:57.136689 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:58.050338 kubelet[1940]: E1002 20:44:58.050316 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:44:58.137310 kubelet[1940]: E1002 20:44:58.137291 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:44:59.138065 kubelet[1940]: E1002 20:44:59.138039 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:00.138554 kubelet[1940]: E1002 20:45:00.138512 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:01.139645 kubelet[1940]: E1002 20:45:01.139619 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:01.443883 kubelet[1940]: E1002 20:45:01.443582 1940 controller.go:187] failed to update lease, error: Put "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.20.44?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Oct 2 20:45:01.620239 kubelet[1940]: E1002 20:45:01.620208 1940 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"10.200.20.44\": Get \"https://10.200.20.40:6443/api/v1/nodes/10.200.20.44?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 2 20:45:02.140470 kubelet[1940]: E1002 20:45:02.140439 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:03.051501 kubelet[1940]: E1002 20:45:03.051452 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:45:03.141670 kubelet[1940]: E1002 20:45:03.141642 1940 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:04.142643 kubelet[1940]: E1002 20:45:04.142610 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:05.143076 kubelet[1940]: E1002 20:45:05.143047 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:06.143538 kubelet[1940]: E1002 20:45:06.143506 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:07.144081 kubelet[1940]: E1002 20:45:07.144058 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:08.052686 kubelet[1940]: E1002 20:45:08.052663 1940 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:45:08.144650 kubelet[1940]: E1002 20:45:08.144631 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:09.145196 kubelet[1940]: E1002 20:45:09.145172 1940 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:09.722315 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#142 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.722606 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#143 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.729695 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#147 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.811681 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#146 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.811925 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#145 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.812060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#150 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.812187 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#144 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.812294 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#149 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.812396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.812495 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.812591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.812690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.812791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.812891 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.819008 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.826178 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 
hv 0xc0000001 Oct 2 20:45:09.848868 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#142 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.849043 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#143 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.855950 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#147 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.863126 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#146 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.870171 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#145 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.877157 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#150 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.884164 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#144 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.891405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#149 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.898715 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.905769 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.913257 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.920946 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.928438 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.935376 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.942434 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#148 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.949751 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.957118 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.964281 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.971352 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Oct 2 20:45:09.978586 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
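The closing burst of `hv_storvsc` messages records SCSI writes failing on the Hyper-V virtual disk: `cmd 0x2a` is the WRITE(10) opcode, `scsi 0x2` is a CHECK CONDITION status, and the `srb`/`hv` fields carry host-side completion codes (`0xc0000001` corresponds to the NTSTATUS value STATUS_UNSUCCESSFUL). A small decoding sketch for these fields, assuming standard SCSI opcode/status values and Windows SRB status constants rather than anything taken from the driver:

```go
// Illustrative decoder for the "cmd ... status: scsi ... srb ... hv ..." fields
// in the hv_storvsc lines above. The name mappings are assumptions based on
// standard SCSI and Windows SRB definitions, not the driver source.
package main

import "fmt"

var (
	opcodeNames = map[byte]string{0x28: "READ(10)", 0x2a: "WRITE(10)"}
	scsiNames   = map[byte]string{0x00: "GOOD", 0x02: "CHECK CONDITION", 0x08: "BUSY"}
	srbNames    = map[byte]string{0x01: "SRB_STATUS_SUCCESS", 0x04: "SRB_STATUS_ERROR"}
)

func decode(cmd, scsi, srb byte, hv uint32) string {
	return fmt.Sprintf("cmd %#x (%s), scsi %#x (%s), srb %#x (%s), hv %#010x",
		cmd, opcodeNames[cmd], scsi, scsiNames[scsi], srb, srbNames[srb], hv)
}

func main() {
	// Field values copied from the log lines above.
	fmt.Println(decode(0x2a, 0x02, 0x04, 0xc0000001))
}
```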