Feb 9 18:35:14.758615 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 9 18:35:14.758635 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024 Feb 9 18:35:14.758643 kernel: efi: EFI v2.70 by EDK II Feb 9 18:35:14.758649 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Feb 9 18:35:14.758654 kernel: random: crng init done Feb 9 18:35:14.758659 kernel: ACPI: Early table checksum verification disabled Feb 9 18:35:14.758666 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Feb 9 18:35:14.758672 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 9 18:35:14.758678 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:35:14.758684 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:35:14.758689 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:35:14.758694 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:35:14.758700 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:35:14.758705 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:35:14.758713 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:35:14.758719 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:35:14.758725 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 9 18:35:14.758731 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 9 18:35:14.758737 kernel: NUMA: Failed to initialise from firmware Feb 9 18:35:14.758743 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 18:35:14.758749 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff] Feb 9 18:35:14.758754 kernel: Zone ranges: Feb 9 18:35:14.758760 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 18:35:14.758767 kernel: DMA32 empty Feb 9 18:35:14.758773 kernel: Normal empty Feb 9 18:35:14.758778 kernel: Movable zone start for each node Feb 9 18:35:14.758784 kernel: Early memory node ranges Feb 9 18:35:14.758790 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Feb 9 18:35:14.758796 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Feb 9 18:35:14.758801 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Feb 9 18:35:14.758807 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Feb 9 18:35:14.758813 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Feb 9 18:35:14.758819 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Feb 9 18:35:14.758824 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Feb 9 18:35:14.758830 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 9 18:35:14.758837 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 9 18:35:14.758843 kernel: psci: probing for conduit method from ACPI. Feb 9 18:35:14.758849 kernel: psci: PSCIv1.1 detected in firmware. 
Feb 9 18:35:14.758870 kernel: psci: Using standard PSCI v0.2 function IDs Feb 9 18:35:14.758877 kernel: psci: Trusted OS migration not required Feb 9 18:35:14.758886 kernel: psci: SMC Calling Convention v1.1 Feb 9 18:35:14.758892 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 9 18:35:14.758900 kernel: ACPI: SRAT not present Feb 9 18:35:14.758906 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 9 18:35:14.758912 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 9 18:35:14.758919 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 9 18:35:14.758925 kernel: Detected PIPT I-cache on CPU0 Feb 9 18:35:14.758931 kernel: CPU features: detected: GIC system register CPU interface Feb 9 18:35:14.758937 kernel: CPU features: detected: Hardware dirty bit management Feb 9 18:35:14.758943 kernel: CPU features: detected: Spectre-v4 Feb 9 18:35:14.758949 kernel: CPU features: detected: Spectre-BHB Feb 9 18:35:14.758956 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 9 18:35:14.758963 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 9 18:35:14.758969 kernel: CPU features: detected: ARM erratum 1418040 Feb 9 18:35:14.758975 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 9 18:35:14.758981 kernel: Policy zone: DMA Feb 9 18:35:14.758989 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4 Feb 9 18:35:14.758995 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 18:35:14.759001 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 18:35:14.759008 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 18:35:14.759014 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 18:35:14.759020 kernel: Memory: 2459144K/2572288K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 113144K reserved, 0K cma-reserved) Feb 9 18:35:14.759028 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 9 18:35:14.759034 kernel: trace event string verifier disabled Feb 9 18:35:14.759040 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 9 18:35:14.759047 kernel: rcu: RCU event tracing is enabled. Feb 9 18:35:14.759053 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 9 18:35:14.759060 kernel: Trampoline variant of Tasks RCU enabled. Feb 9 18:35:14.759066 kernel: Tracing variant of Tasks RCU enabled. Feb 9 18:35:14.759072 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 9 18:35:14.759078 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 9 18:35:14.759085 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 9 18:35:14.759091 kernel: GICv3: 256 SPIs implemented Feb 9 18:35:14.759098 kernel: GICv3: 0 Extended SPIs implemented Feb 9 18:35:14.759104 kernel: GICv3: Distributor has no Range Selector support Feb 9 18:35:14.759110 kernel: Root IRQ handler: gic_handle_irq Feb 9 18:35:14.759116 kernel: GICv3: 16 PPIs implemented Feb 9 18:35:14.759122 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 9 18:35:14.759128 kernel: ACPI: SRAT not present Feb 9 18:35:14.759134 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 9 18:35:14.759141 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Feb 9 18:35:14.759147 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Feb 9 18:35:14.759153 kernel: GICv3: using LPI property table @0x00000000400d0000 Feb 9 18:35:14.759159 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Feb 9 18:35:14.759166 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:35:14.759173 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 9 18:35:14.759179 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 9 18:35:14.759185 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 9 18:35:14.759191 kernel: arm-pv: using stolen time PV Feb 9 18:35:14.759198 kernel: Console: colour dummy device 80x25 Feb 9 18:35:14.759204 kernel: ACPI: Core revision 20210730 Feb 9 18:35:14.759211 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 9 18:35:14.759217 kernel: pid_max: default: 32768 minimum: 301 Feb 9 18:35:14.759223 kernel: LSM: Security Framework initializing Feb 9 18:35:14.759230 kernel: SELinux: Initializing. Feb 9 18:35:14.759237 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 18:35:14.759243 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 18:35:14.759250 kernel: rcu: Hierarchical SRCU implementation. Feb 9 18:35:14.759256 kernel: Platform MSI: ITS@0x8080000 domain created Feb 9 18:35:14.759262 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 9 18:35:14.759269 kernel: Remapping and enabling EFI services. Feb 9 18:35:14.759275 kernel: smp: Bringing up secondary CPUs ... 
Feb 9 18:35:14.759281 kernel: Detected PIPT I-cache on CPU1 Feb 9 18:35:14.759287 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 9 18:35:14.759295 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Feb 9 18:35:14.759301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:35:14.759308 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 9 18:35:14.759314 kernel: Detected PIPT I-cache on CPU2 Feb 9 18:35:14.759321 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 9 18:35:14.759327 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Feb 9 18:35:14.759334 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:35:14.759340 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 9 18:35:14.759346 kernel: Detected PIPT I-cache on CPU3 Feb 9 18:35:14.759353 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 9 18:35:14.759360 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Feb 9 18:35:14.759366 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:35:14.759373 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 9 18:35:14.759379 kernel: smp: Brought up 1 node, 4 CPUs Feb 9 18:35:14.759389 kernel: SMP: Total of 4 processors activated. Feb 9 18:35:14.759397 kernel: CPU features: detected: 32-bit EL0 Support Feb 9 18:35:14.759404 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 9 18:35:14.759411 kernel: CPU features: detected: Common not Private translations Feb 9 18:35:14.759417 kernel: CPU features: detected: CRC32 instructions Feb 9 18:35:14.759424 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 9 18:35:14.759430 kernel: CPU features: detected: LSE atomic instructions Feb 9 18:35:14.759437 kernel: CPU features: detected: Privileged Access Never Feb 9 18:35:14.759445 kernel: CPU features: detected: RAS Extension Support Feb 9 18:35:14.759452 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 9 18:35:14.759458 kernel: CPU: All CPU(s) started at EL1 Feb 9 18:35:14.759465 kernel: alternatives: patching kernel code Feb 9 18:35:14.759473 kernel: devtmpfs: initialized Feb 9 18:35:14.759480 kernel: KASLR enabled Feb 9 18:35:14.759487 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 18:35:14.759493 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 9 18:35:14.759500 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 18:35:14.759506 kernel: SMBIOS 3.0.0 present. 
Feb 9 18:35:14.759513 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Feb 9 18:35:14.759520 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:35:14.759526 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 18:35:14.759533 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 18:35:14.759541 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 18:35:14.759548 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:35:14.759555 kernel: audit: type=2000 audit(0.043:1): state=initialized audit_enabled=0 res=1
Feb 9 18:35:14.759562 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:35:14.759568 kernel: cpuidle: using governor menu
Feb 9 18:35:14.759575 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 18:35:14.759582 kernel: ASID allocator initialised with 32768 entries
Feb 9 18:35:14.759588 kernel: ACPI: bus type PCI registered
Feb 9 18:35:14.759595 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:35:14.759603 kernel: Serial: AMBA PL011 UART driver
Feb 9 18:35:14.759609 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:35:14.759616 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 18:35:14.759623 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:35:14.759629 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 18:35:14.759636 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:35:14.759643 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 18:35:14.759649 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:35:14.759656 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:35:14.759664 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:35:14.759671 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:35:14.759677 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:35:14.759684 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:35:14.759690 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:35:14.759697 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:35:14.759704 kernel: ACPI: Interpreter enabled
Feb 9 18:35:14.759710 kernel: ACPI: Using GIC for interrupt routing
Feb 9 18:35:14.759717 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 18:35:14.759725 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 18:35:14.759731 kernel: printk: console [ttyAMA0] enabled
Feb 9 18:35:14.759738 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 18:35:14.759898 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 18:35:14.759969 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 18:35:14.760027 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 18:35:14.760085 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 9 18:35:14.760145 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 9 18:35:14.760153 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 9 18:35:14.760160 kernel: PCI host bridge to bus 0000:00
Feb 9 18:35:14.760227 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 9 18:35:14.760282 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 18:35:14.760334 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 9 18:35:14.760385 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 18:35:14.760458 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 9 18:35:14.760526 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 18:35:14.760586 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 9 18:35:14.760646 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 9 18:35:14.760705 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:35:14.760775 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 9 18:35:14.760839 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 9 18:35:14.760926 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 9 18:35:14.760980 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 9 18:35:14.761036 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 18:35:14.761089 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 9 18:35:14.761097 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 18:35:14.761104 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 18:35:14.761111 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 18:35:14.761119 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 18:35:14.761126 kernel: iommu: Default domain type: Translated
Feb 9 18:35:14.761133 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 18:35:14.761139 kernel: vgaarb: loaded
Feb 9 18:35:14.761146 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:35:14.761152 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 18:35:14.761159 kernel: PTP clock support registered
Feb 9 18:35:14.761165 kernel: Registered efivars operations
Feb 9 18:35:14.761172 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 18:35:14.761178 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:35:14.761186 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:35:14.761193 kernel: pnp: PnP ACPI init
Feb 9 18:35:14.761257 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 9 18:35:14.761266 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 18:35:14.761273 kernel: NET: Registered PF_INET protocol family
Feb 9 18:35:14.761279 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:35:14.761286 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:35:14.761293 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:35:14.761301 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:35:14.761308 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:35:14.761314 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:35:14.761321 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:35:14.761328 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:35:14.761334 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:35:14.761341 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:35:14.761347 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 9 18:35:14.761355 kernel: kvm [1]: HYP mode not available
Feb 9 18:35:14.761361 kernel: Initialise system trusted keyrings
Feb 9 18:35:14.761368 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:35:14.761374 kernel: Key type asymmetric registered
Feb 9 18:35:14.761381 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:35:14.761388 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:35:14.761394 kernel: io scheduler mq-deadline registered
Feb 9 18:35:14.761401 kernel: io scheduler kyber registered
Feb 9 18:35:14.761407 kernel: io scheduler bfq registered
Feb 9 18:35:14.761414 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 18:35:14.761421 kernel: ACPI: button: Power Button [PWRB]
Feb 9 18:35:14.761428 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 18:35:14.761487 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 9 18:35:14.761496 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:35:14.761502 kernel: thunder_xcv, ver 1.0
Feb 9 18:35:14.761509 kernel: thunder_bgx, ver 1.0
Feb 9 18:35:14.761515 kernel: nicpf, ver 1.0
Feb 9 18:35:14.761522 kernel: nicvf, ver 1.0
Feb 9 18:35:14.761594 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 18:35:14.761652 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:35:14 UTC (1707503714)
Feb 9 18:35:14.761661 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 18:35:14.761668 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:35:14.761674 kernel: Segment Routing with IPv6
Feb 9 18:35:14.761681 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:35:14.761687 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:35:14.761694 kernel: Key type dns_resolver registered
Feb 9 18:35:14.761700 kernel: registered taskstats version 1
Feb 9 18:35:14.761708 kernel: Loading compiled-in X.509 certificates
Feb 9 18:35:14.761715 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 18:35:14.761721 kernel: Key type .fscrypt registered
Feb 9 18:35:14.761728 kernel: Key type fscrypt-provisioning registered
Feb 9 18:35:14.761734 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:35:14.761741 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:35:14.761747 kernel: ima: No architecture policies found
Feb 9 18:35:14.761754 kernel: Freeing unused kernel memory: 34688K
Feb 9 18:35:14.761760 kernel: Run /init as init process
Feb 9 18:35:14.761767 kernel: with arguments:
Feb 9 18:35:14.761774 kernel: /init
Feb 9 18:35:14.761780 kernel: with environment:
Feb 9 18:35:14.761786 kernel: HOME=/
Feb 9 18:35:14.761793 kernel: TERM=linux
Feb 9 18:35:14.761799 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:35:14.761807 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:35:14.761816 systemd[1]: Detected virtualization kvm.
Feb 9 18:35:14.761824 systemd[1]: Detected architecture arm64.
Feb 9 18:35:14.761831 systemd[1]: Running in initrd.
Feb 9 18:35:14.761838 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:35:14.761844 systemd[1]: Hostname set to .
Feb 9 18:35:14.761852 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 18:35:14.761873 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:35:14.761896 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:35:14.761904 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:35:14.761913 systemd[1]: Reached target paths.target.
Feb 9 18:35:14.761920 systemd[1]: Reached target slices.target.
Feb 9 18:35:14.761927 systemd[1]: Reached target swap.target.
Feb 9 18:35:14.761934 systemd[1]: Reached target timers.target.
Feb 9 18:35:14.761942 systemd[1]: Listening on iscsid.socket.
Feb 9 18:35:14.761949 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:35:14.761956 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:35:14.761965 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:35:14.761972 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:35:14.761979 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:35:14.761986 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:35:14.761993 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:35:14.762000 systemd[1]: Reached target sockets.target.
Feb 9 18:35:14.762007 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:35:14.762014 systemd[1]: Finished network-cleanup.service.
Feb 9 18:35:14.762021 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:35:14.762029 systemd[1]: Starting systemd-journald.service...
Feb 9 18:35:14.762036 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:35:14.762043 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:35:14.762051 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:35:14.762058 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:35:14.762065 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:35:14.762072 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:35:14.762079 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 18:35:14.762086 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 18:35:14.762094 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:35:14.762104 systemd-journald[289]: Journal started Feb 9 18:35:14.762144 systemd-journald[289]: Runtime Journal (/run/log/journal/90f730c2a060472696928f232c3487ed) is 6.0M, max 48.7M, 42.6M free. Feb 9 18:35:14.737143 systemd-modules-load[290]: Inserted module 'overlay' Feb 9 18:35:14.764164 systemd[1]: Started systemd-journald.service. Feb 9 18:35:14.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.766876 kernel: audit: type=1130 audit(1707503714.764:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.774727 systemd-resolved[291]: Positive Trust Anchors: Feb 9 18:35:14.774745 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:35:14.776905 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 18:35:14.774773 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:35:14.783181 kernel: Bridge firewalling registered Feb 9 18:35:14.783197 kernel: audit: type=1130 audit(1707503714.782:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.778929 systemd-resolved[291]: Defaulting to hostname 'linux'. Feb 9 18:35:14.781877 systemd[1]: Started systemd-resolved.service. Feb 9 18:35:14.782324 systemd-modules-load[290]: Inserted module 'br_netfilter' Feb 9 18:35:14.793988 kernel: audit: type=1130 audit(1707503714.790:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.783124 systemd[1]: Reached target nss-lookup.target. Feb 9 18:35:14.789732 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 18:35:14.791243 systemd[1]: Starting dracut-cmdline.service... 
Feb 9 18:35:14.797119 kernel: SCSI subsystem initialized Feb 9 18:35:14.800268 dracut-cmdline[308]: dracut-dracut-053 Feb 9 18:35:14.802419 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4 Feb 9 18:35:14.808427 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 18:35:14.808444 kernel: device-mapper: uevent: version 1.0.3 Feb 9 18:35:14.808453 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 18:35:14.810241 systemd-modules-load[290]: Inserted module 'dm_multipath' Feb 9 18:35:14.810979 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:35:14.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.814831 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:35:14.816013 kernel: audit: type=1130 audit(1707503714.810:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.821735 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:35:14.824903 kernel: audit: type=1130 audit(1707503714.821:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.863888 kernel: Loading iSCSI transport class v2.0-870. Feb 9 18:35:14.871893 kernel: iscsi: registered transport (tcp) Feb 9 18:35:14.886883 kernel: iscsi: registered transport (qla4xxx) Feb 9 18:35:14.886905 kernel: QLogic iSCSI HBA Driver Feb 9 18:35:14.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.921049 systemd[1]: Finished dracut-cmdline.service. Feb 9 18:35:14.924418 kernel: audit: type=1130 audit(1707503714.920:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:14.922491 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 18:35:14.965887 kernel: raid6: neonx8 gen() 13715 MB/s Feb 9 18:35:14.982885 kernel: raid6: neonx8 xor() 10820 MB/s Feb 9 18:35:14.999885 kernel: raid6: neonx4 gen() 13538 MB/s Feb 9 18:35:15.016887 kernel: raid6: neonx4 xor() 11271 MB/s Feb 9 18:35:15.033892 kernel: raid6: neonx2 gen() 12887 MB/s Feb 9 18:35:15.050889 kernel: raid6: neonx2 xor() 10221 MB/s Feb 9 18:35:15.067880 kernel: raid6: neonx1 gen() 10439 MB/s Feb 9 18:35:15.084880 kernel: raid6: neonx1 xor() 8762 MB/s Feb 9 18:35:15.101894 kernel: raid6: int64x8 gen() 6276 MB/s Feb 9 18:35:15.118879 kernel: raid6: int64x8 xor() 3547 MB/s Feb 9 18:35:15.135879 kernel: raid6: int64x4 gen() 7233 MB/s Feb 9 18:35:15.152877 kernel: raid6: int64x4 xor() 3851 MB/s Feb 9 18:35:15.169877 kernel: raid6: int64x2 gen() 6145 MB/s Feb 9 18:35:15.186878 kernel: raid6: int64x2 xor() 3321 MB/s Feb 9 18:35:15.203879 kernel: raid6: int64x1 gen() 5031 MB/s Feb 9 18:35:15.221073 kernel: raid6: int64x1 xor() 2638 MB/s Feb 9 18:35:15.221086 kernel: raid6: using algorithm neonx8 gen() 13715 MB/s Feb 9 18:35:15.221094 kernel: raid6: .... xor() 10820 MB/s, rmw enabled Feb 9 18:35:15.221102 kernel: raid6: using neon recovery algorithm Feb 9 18:35:15.232110 kernel: xor: measuring software checksum speed Feb 9 18:35:15.232131 kernel: 8regs : 17319 MB/sec Feb 9 18:35:15.232953 kernel: 32regs : 20755 MB/sec Feb 9 18:35:15.234120 kernel: arm64_neon : 27462 MB/sec Feb 9 18:35:15.234131 kernel: xor: using function: arm64_neon (27462 MB/sec) Feb 9 18:35:15.290887 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 9 18:35:15.301138 systemd[1]: Finished dracut-pre-udev.service. Feb 9 18:35:15.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:15.302837 systemd[1]: Starting systemd-udevd.service... Feb 9 18:35:15.306487 kernel: audit: type=1130 audit(1707503715.301:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:15.306507 kernel: audit: type=1334 audit(1707503715.301:9): prog-id=7 op=LOAD Feb 9 18:35:15.306523 kernel: audit: type=1334 audit(1707503715.301:10): prog-id=8 op=LOAD Feb 9 18:35:15.301000 audit: BPF prog-id=7 op=LOAD Feb 9 18:35:15.301000 audit: BPF prog-id=8 op=LOAD Feb 9 18:35:15.319720 systemd-udevd[492]: Using default interface naming scheme 'v252'. Feb 9 18:35:15.323076 systemd[1]: Started systemd-udevd.service. Feb 9 18:35:15.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:15.325137 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 18:35:15.336921 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Feb 9 18:35:15.363673 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 18:35:15.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:15.365300 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:35:15.398280 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 18:35:15.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:15.422944 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 18:35:15.425468 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 18:35:15.425497 kernel: GPT:9289727 != 19775487 Feb 9 18:35:15.425506 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 18:35:15.425515 kernel: GPT:9289727 != 19775487 Feb 9 18:35:15.425954 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 18:35:15.426879 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:35:15.439897 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (550) Feb 9 18:35:15.441671 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:35:15.442456 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:35:15.450788 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 18:35:15.453975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:35:15.457058 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:35:15.458470 systemd[1]: Starting disk-uuid.service... Feb 9 18:35:15.464117 disk-uuid[563]: Primary Header is updated. Feb 9 18:35:15.464117 disk-uuid[563]: Secondary Entries is updated. Feb 9 18:35:15.464117 disk-uuid[563]: Secondary Header is updated. Feb 9 18:35:15.466942 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:35:16.496341 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 18:35:16.496386 disk-uuid[564]: The operation has completed successfully. Feb 9 18:35:16.521008 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:35:16.521102 systemd[1]: Finished disk-uuid.service. Feb 9 18:35:16.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:16.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:16.525113 systemd[1]: Starting verity-setup.service... Feb 9 18:35:16.550896 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 18:35:16.579948 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:35:16.581669 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:35:16.582562 systemd[1]: Finished verity-setup.service. Feb 9 18:35:16.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:16.636576 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:35:16.637733 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 18:35:16.637429 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:35:16.638151 systemd[1]: Starting ignition-setup.service... Feb 9 18:35:16.640171 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 9 18:35:16.651544 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:35:16.651595 kernel: BTRFS info (device vda6): using free space tree Feb 9 18:35:16.651605 kernel: BTRFS info (device vda6): has skinny extents Feb 9 18:35:16.660754 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:35:16.672959 systemd[1]: Finished ignition-setup.service. Feb 9 18:35:16.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:16.676235 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:35:16.722196 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:35:16.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:16.723000 audit: BPF prog-id=9 op=LOAD Feb 9 18:35:16.724519 systemd[1]: Starting systemd-networkd.service... Feb 9 18:35:16.748219 systemd-networkd[734]: lo: Link UP Feb 9 18:35:16.749362 systemd-networkd[734]: lo: Gained carrier Feb 9 18:35:16.750541 systemd-networkd[734]: Enumeration completed Feb 9 18:35:16.751484 systemd[1]: Started systemd-networkd.service. Feb 9 18:35:16.752273 systemd[1]: Reached target network.target. Feb 9 18:35:16.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:16.753047 systemd-networkd[734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:35:16.753969 systemd[1]: Starting iscsiuio.service... Feb 9 18:35:16.754915 systemd-networkd[734]: eth0: Link UP Feb 9 18:35:16.754920 systemd-networkd[734]: eth0: Gained carrier Feb 9 18:35:16.765725 systemd[1]: Started iscsiuio.service. Feb 9 18:35:16.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:16.767321 systemd[1]: Starting iscsid.service... Feb 9 18:35:16.771054 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:35:16.771054 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 18:35:16.771054 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:35:16.771054 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 18:35:16.771054 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:35:16.771054 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:35:16.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:16.774041 systemd[1]: Started iscsid.service. 
Feb 9 18:35:16.779457 systemd[1]: Starting dracut-initqueue.service...
Feb 9 18:35:16.785975 systemd-networkd[734]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 9 18:35:16.791210 systemd[1]: Finished dracut-initqueue.service.
Feb 9 18:35:16.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:16.792107 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 18:35:16.791961 ignition[671]: Ignition 2.14.0
Feb 9 18:35:16.793181 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 18:35:16.791968 ignition[671]: Stage: fetch-offline
Feb 9 18:35:16.794383 systemd[1]: Reached target remote-fs.target.
Feb 9 18:35:16.792006 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:35:16.796790 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 18:35:16.792015 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:35:16.792157 ignition[671]: parsed url from cmdline: ""
Feb 9 18:35:16.792161 ignition[671]: no config URL provided
Feb 9 18:35:16.792165 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 18:35:16.792172 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Feb 9 18:35:16.792190 ignition[671]: op(1): [started] loading QEMU firmware config module
Feb 9 18:35:16.792195 ignition[671]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 9 18:35:16.805359 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 18:35:16.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:16.806906 ignition[671]: op(1): [finished] loading QEMU firmware config module
Feb 9 18:35:16.825820 ignition[671]: parsing config with SHA512: 5287c2c8bcd1bfde724c9649d11942669f8e5d63be82948e7c9ec99eeff724313a7fd9ec0746e5d835b7d6c092e5d95b26734f75737502a7db7c172921d2a057
Feb 9 18:35:16.848911 unknown[671]: fetched base config from "system"
Feb 9 18:35:16.848925 unknown[671]: fetched user config from "qemu"
Feb 9 18:35:16.849426 ignition[671]: fetch-offline: fetch-offline passed
Feb 9 18:35:16.849487 ignition[671]: Ignition finished successfully
Feb 9 18:35:16.850773 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 18:35:16.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:16.851948 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 9 18:35:16.852694 systemd[1]: Starting ignition-kargs.service...
Feb 9 18:35:16.861045 ignition[761]: Ignition 2.14.0
Feb 9 18:35:16.861054 ignition[761]: Stage: kargs
Feb 9 18:35:16.861148 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:35:16.861157 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:35:16.864122 systemd[1]: Finished ignition-kargs.service.
Feb 9 18:35:16.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:16.862057 ignition[761]: kargs: kargs passed
Feb 9 18:35:16.862098 ignition[761]: Ignition finished successfully
Feb 9 18:35:16.865511 systemd[1]: Starting ignition-disks.service...
Feb 9 18:35:16.871721 ignition[767]: Ignition 2.14.0
Feb 9 18:35:16.871731 ignition[767]: Stage: disks
Feb 9 18:35:16.871818 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Feb 9 18:35:16.873454 systemd[1]: Finished ignition-disks.service.
Feb 9 18:35:16.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:16.871827 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:35:16.874576 systemd[1]: Reached target initrd-root-device.target.
Feb 9 18:35:16.872649 ignition[767]: disks: disks passed
Feb 9 18:35:16.875469 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:35:16.872688 ignition[767]: Ignition finished successfully
Feb 9 18:35:16.876620 systemd[1]: Reached target local-fs.target.
Feb 9 18:35:16.877602 systemd[1]: Reached target sysinit.target.
Feb 9 18:35:16.878438 systemd[1]: Reached target basic.target.
Feb 9 18:35:16.880194 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 18:35:16.894188 systemd-fsck[775]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 18:35:16.897908 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 18:35:16.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:16.900660 systemd[1]: Mounting sysroot.mount...
Feb 9 18:35:16.906651 systemd[1]: Mounted sysroot.mount.
Feb 9 18:35:16.907658 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 18:35:16.907345 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 18:35:16.911122 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 18:35:16.912026 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 18:35:16.912066 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 18:35:16.912089 systemd[1]: Reached target ignition-diskful.target.
Feb 9 18:35:16.913898 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 18:35:16.916326 systemd[1]: Starting initrd-setup-root.service...
Feb 9 18:35:16.920957 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 18:35:16.926078 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory
Feb 9 18:35:16.930060 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 18:35:16.933810 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 18:35:16.972799 systemd[1]: Finished initrd-setup-root.service.
Feb 9 18:35:16.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:16.974475 systemd[1]: Starting ignition-mount.service...
Feb 9 18:35:16.975817 systemd[1]: Starting sysroot-boot.service...
Feb 9 18:35:16.980464 bash[826]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 18:35:16.989602 ignition[828]: INFO : Ignition 2.14.0
Feb 9 18:35:16.989602 ignition[828]: INFO : Stage: mount
Feb 9 18:35:16.990859 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 18:35:16.990859 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:35:16.990859 ignition[828]: INFO : mount: mount passed
Feb 9 18:35:16.993476 ignition[828]: INFO : Ignition finished successfully
Feb 9 18:35:16.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:16.993272 systemd[1]: Finished ignition-mount.service.
Feb 9 18:35:16.999051 systemd[1]: Finished sysroot-boot.service.
Feb 9 18:35:16.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:35:17.593443 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 18:35:17.598875 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836)
Feb 9 18:35:17.600081 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 18:35:17.600099 kernel: BTRFS info (device vda6): using free space tree
Feb 9 18:35:17.600108 kernel: BTRFS info (device vda6): has skinny extents
Feb 9 18:35:17.603194 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 18:35:17.604699 systemd[1]: Starting ignition-files.service...
Feb 9 18:35:17.618368 ignition[856]: INFO : Ignition 2.14.0
Feb 9 18:35:17.618368 ignition[856]: INFO : Stage: files
Feb 9 18:35:17.619573 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 18:35:17.619573 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 18:35:17.619573 ignition[856]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 18:35:17.622299 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 18:35:17.622299 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 18:35:17.625130 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 18:35:17.626170 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 18:35:17.626170 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 18:35:17.625893 unknown[856]: wrote ssh authorized keys file for user: core
Feb 9 18:35:17.629036 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 18:35:17.629036 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 18:35:17.629036 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 18:35:17.629036 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 9 18:35:17.921138 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 18:35:18.054003 systemd-networkd[734]: eth0: Gained IPv6LL
Feb 9 18:35:18.223397 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 9 18:35:18.223397 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 9 18:35:18.227131 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 18:35:18.227131 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 9 18:35:18.398801 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 18:35:18.517834 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 9 18:35:18.519963 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 9 18:35:18.519963 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 18:35:18.519963 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 9 18:35:18.569750 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 18:35:18.872107 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 9 18:35:18.872107 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 18:35:18.875393 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:35:18.875393 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 9 18:35:18.898848 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 18:35:19.565594 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 9 18:35:19.567719 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 18:35:19.567719 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 18:35:19.567719 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 18:35:19.567719 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:35:19.567719 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 18:35:19.567719 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:35:19.567719 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(b): [started] processing unit "containerd.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(b): op(c): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(b): op(c): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(b): [finished] processing unit "containerd.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(d): [started] processing unit "prepare-cni-plugins.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(d): op(e): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(d): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(f): [started] processing unit "prepare-critools.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(f): [finished] processing unit "prepare-critools.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(13): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:35:19.576452 ignition[856]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 18:35:19.600120 ignition[856]: INFO : files: op(14): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 18:35:19.600120 ignition[856]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 18:35:19.600120 ignition[856]: INFO : files: op(15): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 18:35:19.600120 ignition[856]: INFO : files: op(15): op(16): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 18:35:19.610212 ignition[856]: INFO : files: op(15): op(16): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 18:35:19.612261 ignition[856]: INFO : files: op(15): [finished] setting preset to disabled for "coreos-metadata.service" Feb 9 18:35:19.612261 ignition[856]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:35:19.612261 ignition[856]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:35:19.612261 ignition[856]: INFO : files: files passed Feb 9 18:35:19.612261 ignition[856]: INFO : Ignition finished successfully Feb 9 18:35:19.620551 kernel: kauditd_printk_skb: 21 callbacks suppressed Feb 9 18:35:19.620571 kernel: audit: type=1130 audit(1707503719.613:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.612363 systemd[1]: Finished ignition-files.service. Feb 9 18:35:19.614839 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:35:19.617896 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:35:19.623638 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 9 18:35:19.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.618517 systemd[1]: Starting ignition-quench.service... Feb 9 18:35:19.628696 kernel: audit: type=1130 audit(1707503719.623:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.628713 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:35:19.623023 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:35:19.624860 systemd[1]: Reached target ignition-complete.target. Feb 9 18:35:19.636068 kernel: audit: type=1130 audit(1707503719.631:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.636091 kernel: audit: type=1131 audit(1707503719.631:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.628807 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:35:19.630918 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:35:19.631003 systemd[1]: Finished ignition-quench.service. 
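The createFiles operations logged above (GET, checksum verification, write under /sysroot) are driven by storage.files entries in the machine's Ignition config, which is not itself reproduced in this log. As an illustrative sketch only, assuming an Ignition spec 3 config, the kubeadm download above would correspond to an entry like the one below; the path, URL and sha512 value are copied from the log lines, while the spec version and the mode are assumptions:

    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [
          {
            "path": "/opt/bin/kubeadm",
            "mode": 493,
            "contents": {
              "source": "https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm",
              "verification": {
                "hash": "sha512-46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db"
              }
            }
          }
        ]
      }
    }

During the initrd stage Ignition writes such a file to /sysroot/opt/bin/kubeadm, which is why the log shows the /sysroot prefix; after switch-root it appears as /opt/bin/kubeadm.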
Feb 9 18:35:19.640570 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:35:19.640658 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:35:19.645689 kernel: audit: type=1130 audit(1707503719.641:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.645705 kernel: audit: type=1131 audit(1707503719.641:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.642088 systemd[1]: Reached target initrd-fs.target. Feb 9 18:35:19.646389 systemd[1]: Reached target initrd.target. Feb 9 18:35:19.647501 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:35:19.648410 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:35:19.658188 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:35:19.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.659619 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:35:19.662190 kernel: audit: type=1130 audit(1707503719.658:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.667107 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:35:19.668126 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:35:19.669214 systemd[1]: Stopped target timers.target. Feb 9 18:35:19.670450 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:35:19.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.670552 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:35:19.674857 kernel: audit: type=1131 audit(1707503719.670:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.671699 systemd[1]: Stopped target initrd.target. Feb 9 18:35:19.674565 systemd[1]: Stopped target basic.target. Feb 9 18:35:19.675626 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:35:19.676766 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:35:19.677916 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:35:19.679192 systemd[1]: Stopped target remote-fs.target. Feb 9 18:35:19.680357 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:35:19.682038 systemd[1]: Stopped target sysinit.target. Feb 9 18:35:19.683124 systemd[1]: Stopped target local-fs.target. Feb 9 18:35:19.684281 systemd[1]: Stopped target local-fs-pre.target. 
Feb 9 18:35:19.685899 systemd[1]: Stopped target swap.target. Feb 9 18:35:19.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.686782 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:35:19.691239 kernel: audit: type=1131 audit(1707503719.687:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.686913 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:35:19.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.688124 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:35:19.695264 kernel: audit: type=1131 audit(1707503719.691:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.690729 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:35:19.690827 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:35:19.692062 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:35:19.692162 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:35:19.694996 systemd[1]: Stopped target paths.target. Feb 9 18:35:19.695969 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:35:19.699912 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:35:19.701388 systemd[1]: Stopped target slices.target. Feb 9 18:35:19.702211 systemd[1]: Stopped target sockets.target. Feb 9 18:35:19.703282 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:35:19.703351 systemd[1]: Closed iscsid.socket. Feb 9 18:35:19.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.704347 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:35:19.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.704443 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:35:19.705799 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:35:19.705945 systemd[1]: Stopped ignition-files.service. Feb 9 18:35:19.707659 systemd[1]: Stopping ignition-mount.service... Feb 9 18:35:19.708915 systemd[1]: Stopping iscsiuio.service... Feb 9 18:35:19.710589 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:35:19.711658 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:35:19.711776 systemd[1]: Stopped systemd-udev-trigger.service. 
Feb 9 18:35:19.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.713094 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:35:19.713187 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:35:19.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.717547 ignition[896]: INFO : Ignition 2.14.0 Feb 9 18:35:19.717547 ignition[896]: INFO : Stage: umount Feb 9 18:35:19.717547 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 18:35:19.717547 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 18:35:19.717547 ignition[896]: INFO : umount: umount passed Feb 9 18:35:19.717547 ignition[896]: INFO : Ignition finished successfully Feb 9 18:35:19.715610 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:35:19.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.715695 systemd[1]: Stopped iscsiuio.service. Feb 9 18:35:19.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.717346 systemd[1]: Stopped target network.target. Feb 9 18:35:19.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.718432 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:35:19.718465 systemd[1]: Closed iscsiuio.socket. Feb 9 18:35:19.719966 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:35:19.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.720917 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:35:19.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.723139 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:35:19.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.723591 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:35:19.723672 systemd[1]: Finished initrd-cleanup.service. 
Feb 9 18:35:19.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.725261 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:35:19.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.725333 systemd[1]: Stopped ignition-mount.service. Feb 9 18:35:19.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.726860 systemd-networkd[734]: eth0: DHCPv6 lease lost Feb 9 18:35:19.728010 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:35:19.728095 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:35:19.744000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:35:19.729586 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:35:19.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.729615 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:35:19.731686 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:35:19.748000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:35:19.731728 systemd[1]: Stopped ignition-disks.service. Feb 9 18:35:19.732596 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:35:19.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.732629 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:35:19.733705 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:35:19.733740 systemd[1]: Stopped ignition-setup.service. Feb 9 18:35:19.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.735492 systemd[1]: Stopping network-cleanup.service... Feb 9 18:35:19.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.736023 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:35:19.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.736076 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 18:35:19.737347 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:35:19.737388 systemd[1]: Stopped systemd-sysctl.service. 
Feb 9 18:35:19.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.738910 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:35:19.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.738953 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:35:19.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.739949 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:35:19.744290 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:35:19.744699 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:35:19.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.744788 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:35:19.748582 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:35:19.748677 systemd[1]: Stopped network-cleanup.service. Feb 9 18:35:19.750550 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:35:19.750660 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:35:19.751810 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:35:19.751851 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:35:19.752913 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:35:19.752948 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:35:19.754146 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:35:19.754191 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:35:19.755494 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:35:19.755536 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:35:19.756773 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:35:19.756813 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:35:19.758889 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:35:19.760103 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 18:35:19.760163 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 18:35:19.762113 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:35:19.762153 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:35:19.763043 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:35:19.763082 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:35:19.765814 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 18:35:19.766340 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Feb 9 18:35:19.766420 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:35:19.786524 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:35:19.786621 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:35:19.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.788027 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:35:19.789049 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:35:19.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:19.789095 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:35:19.791380 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:35:19.797113 systemd[1]: Switching root. Feb 9 18:35:19.797000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:35:19.797000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:35:19.797000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:35:19.797000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:35:19.797000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:35:19.816194 iscsid[744]: iscsid shutting down. Feb 9 18:35:19.816699 systemd-journald[289]: Journal stopped Feb 9 18:35:21.909586 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Feb 9 18:35:21.909640 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:35:21.909657 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 18:35:21.909669 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:35:21.909679 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:35:21.909689 kernel: SELinux: policy capability open_perms=1 Feb 9 18:35:21.909698 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:35:21.909709 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:35:21.909719 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:35:21.909729 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:35:21.909738 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:35:21.909747 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:35:21.909757 systemd[1]: Successfully loaded SELinux policy in 33.084ms. Feb 9 18:35:21.909773 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.790ms. Feb 9 18:35:21.909785 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:35:21.909799 systemd[1]: Detected virtualization kvm. Feb 9 18:35:21.909809 systemd[1]: Detected architecture arm64. Feb 9 18:35:21.909819 systemd[1]: Detected first boot. Feb 9 18:35:21.909829 systemd[1]: Initializing machine ID from VM UUID. Feb 9 18:35:21.909847 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:35:21.909857 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:35:21.909883 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 18:35:21.909895 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:35:21.909909 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:35:21.909920 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:35:21.909930 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 18:35:21.909942 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:35:21.909952 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:35:21.909965 systemd[1]: Created slice system-getty.slice. Feb 9 18:35:21.909976 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:35:21.909987 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:35:21.909998 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:35:21.910008 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:35:21.910018 systemd[1]: Created slice user.slice. Feb 9 18:35:21.910028 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:35:21.910041 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:35:21.910051 systemd[1]: Set up automount boot.automount. Feb 9 18:35:21.910061 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:35:21.910071 systemd[1]: Reached target integritysetup.target. Feb 9 18:35:21.910082 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:35:21.910096 systemd[1]: Reached target remote-fs.target. Feb 9 18:35:21.910106 systemd[1]: Reached target slices.target. Feb 9 18:35:21.910116 systemd[1]: Reached target swap.target. Feb 9 18:35:21.910195 systemd[1]: Reached target torcx.target. Feb 9 18:35:21.910210 systemd[1]: Reached target veritysetup.target. Feb 9 18:35:21.910221 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:35:21.910231 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:35:21.910244 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:35:21.910254 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:35:21.910265 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:35:21.910275 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:35:21.910286 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:35:21.910296 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:35:21.910306 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 18:35:21.910317 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:35:21.910327 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:35:21.910337 systemd[1]: Mounting media.mount... Feb 9 18:35:21.910348 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:35:21.910359 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:35:21.910369 systemd[1]: Mounting tmp.mount... Feb 9 18:35:21.910380 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:35:21.910390 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:35:21.910401 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:35:21.910411 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:35:21.910422 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:35:21.910433 systemd[1]: Starting modprobe@drm.service... Feb 9 18:35:21.910444 systemd[1]: Starting modprobe@efi_pstore.service... 
Feb 9 18:35:21.910455 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:35:21.910466 systemd[1]: Starting modprobe@loop.service... Feb 9 18:35:21.910477 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:35:21.910488 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 18:35:21.910498 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 18:35:21.910509 systemd[1]: Starting systemd-journald.service... Feb 9 18:35:21.910519 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:35:21.910530 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:35:21.910541 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:35:21.910557 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:35:21.910567 kernel: fuse: init (API version 7.34) Feb 9 18:35:21.910577 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:35:21.910592 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:35:21.910602 systemd[1]: Mounted media.mount. Feb 9 18:35:21.910613 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:35:21.910624 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:35:21.910634 systemd[1]: Mounted tmp.mount. Feb 9 18:35:21.910646 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:35:21.910660 kernel: loop: module loaded Feb 9 18:35:21.910675 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:35:21.910686 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:35:21.910697 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:35:21.910707 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:35:21.910720 systemd-journald[1027]: Journal started Feb 9 18:35:21.910765 systemd-journald[1027]: Runtime Journal (/run/log/journal/90f730c2a060472696928f232c3487ed) is 6.0M, max 48.7M, 42.6M free. Feb 9 18:35:21.811000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:35:21.811000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 18:35:21.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.904000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:35:21.904000 audit[1027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffece13350 a2=4000 a3=1 items=0 ppid=1 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:21.904000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:35:21.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:21.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.912643 systemd[1]: Started systemd-journald.service. Feb 9 18:35:21.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.913371 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:35:21.913570 systemd[1]: Finished modprobe@drm.service. Feb 9 18:35:21.914672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:35:21.914858 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:35:21.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.915928 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:35:21.916098 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:35:21.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.917096 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:35:21.917394 systemd[1]: Finished modprobe@loop.service. Feb 9 18:35:21.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:21.920452 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:35:21.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.921876 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:35:21.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.923100 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:35:21.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.924046 systemd[1]: Reached target network-pre.target. Feb 9 18:35:21.925734 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:35:21.927610 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 18:35:21.928443 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:35:21.931618 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:35:21.933936 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:35:21.935556 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:35:21.936633 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:35:21.937466 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:35:21.940083 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:35:21.943831 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:35:21.944875 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:35:21.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.955274 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:35:21.957427 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:35:21.963058 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:35:21.965168 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:35:21.975313 systemd-journald[1027]: Time spent on flushing to /var/log/journal/90f730c2a060472696928f232c3487ed is 12.370ms for 946 entries. Feb 9 18:35:21.975313 systemd-journald[1027]: System Journal (/var/log/journal/90f730c2a060472696928f232c3487ed) is 8.0M, max 195.6M, 187.6M free. Feb 9 18:35:21.999532 systemd-journald[1027]: Received client request to flush runtime journal. Feb 9 18:35:21.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:21.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:21.973320 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:35:21.999975 udevadm[1083]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 18:35:21.974425 systemd[1]: Reached target first-boot-complete.target. Feb 9 18:35:21.987365 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:35:21.989936 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:35:21.992019 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:35:22.000396 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:35:22.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.010006 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:35:22.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.314295 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:35:22.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.316319 systemd[1]: Starting systemd-udevd.service... Feb 9 18:35:22.331813 systemd-udevd[1094]: Using default interface naming scheme 'v252'. Feb 9 18:35:22.344762 systemd[1]: Started systemd-udevd.service. Feb 9 18:35:22.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.347310 systemd[1]: Starting systemd-networkd.service... Feb 9 18:35:22.361796 systemd[1]: Found device dev-ttyAMA0.device. Feb 9 18:35:22.373133 systemd[1]: Starting systemd-userdbd.service... Feb 9 18:35:22.401667 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:35:22.409591 systemd[1]: Started systemd-userdbd.service. Feb 9 18:35:22.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.452236 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:35:22.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.454155 systemd[1]: Starting lvm2-activation-early.service... 
Feb 9 18:35:22.463139 systemd-networkd[1103]: lo: Link UP Feb 9 18:35:22.463149 systemd-networkd[1103]: lo: Gained carrier Feb 9 18:35:22.463469 systemd-networkd[1103]: Enumeration completed Feb 9 18:35:22.463572 systemd[1]: Started systemd-networkd.service. Feb 9 18:35:22.463574 systemd-networkd[1103]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:35:22.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.466721 systemd-networkd[1103]: eth0: Link UP Feb 9 18:35:22.466730 systemd-networkd[1103]: eth0: Gained carrier Feb 9 18:35:22.469951 lvm[1128]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:35:22.493693 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:35:22.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.494455 systemd[1]: Reached target cryptsetup.target. Feb 9 18:35:22.496062 systemd[1]: Starting lvm2-activation.service... Feb 9 18:35:22.499441 lvm[1130]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:35:22.499977 systemd-networkd[1103]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 18:35:22.532627 systemd[1]: Finished lvm2-activation.service. Feb 9 18:35:22.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.533331 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:35:22.533931 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:35:22.533957 systemd[1]: Reached target local-fs.target. Feb 9 18:35:22.534505 systemd[1]: Reached target machines.target. Feb 9 18:35:22.536068 systemd[1]: Starting ldconfig.service... Feb 9 18:35:22.536830 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:35:22.536938 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:35:22.537928 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:35:22.539637 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:35:22.541704 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:35:22.542822 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:35:22.542896 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:35:22.543845 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 18:35:22.547043 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1133 (bootctl) Feb 9 18:35:22.548251 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 18:35:22.560415 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
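The "Configuring with /usr/lib/systemd/network/zz-default.network" and "DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired" messages above come from systemd-networkd matching eth0 against Flatcar's catch-all network unit and running DHCP on it. A minimal sketch of what such a catch-all unit looks like follows; the shipped file's exact contents are not shown in the log, so these lines are an assumption, not a copy:

    # Illustrative catch-all .network unit (assumed, not copied from this host).
    [Match]
    # Match any link that has no more specific .network unit.
    Name=*

    [Network]
    # Acquire addresses via DHCP, as seen for eth0 in the log above.
    DHCP=yes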
Feb 9 18:35:22.562209 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:35:22.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.565221 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:35:22.566281 systemd-tmpfiles[1136]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:35:22.631504 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:35:22.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.656386 systemd-fsck[1142]: fsck.fat 4.2 (2021-01-31) Feb 9 18:35:22.656386 systemd-fsck[1142]: /dev/vda1: 236 files, 113719/258078 clusters Feb 9 18:35:22.658431 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:35:22.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.716883 ldconfig[1132]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:35:22.719706 systemd[1]: Finished ldconfig.service. Feb 9 18:35:22.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.888610 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:35:22.890065 systemd[1]: Mounting boot.mount... Feb 9 18:35:22.896624 systemd[1]: Mounted boot.mount. Feb 9 18:35:22.904195 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:35:22.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.951042 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:35:22.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.952944 systemd[1]: Starting audit-rules.service... Feb 9 18:35:22.954559 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:35:22.956260 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:35:22.958432 systemd[1]: Starting systemd-resolved.service... Feb 9 18:35:22.960633 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:35:22.962512 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:35:22.963987 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:35:22.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:22.966134 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:35:22.973660 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:35:22.973000 audit[1163]: SYSTEM_BOOT pid=1163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.977803 systemd[1]: Starting systemd-update-done.service... Feb 9 18:35:22.980188 systemd[1]: Finished systemd-update-utmp.service. Feb 9 18:35:22.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.987538 systemd[1]: Finished systemd-update-done.service. Feb 9 18:35:22.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:22.991000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:35:22.991000 audit[1176]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffca8e8200 a2=420 a3=0 items=0 ppid=1151 pid=1176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:22.991000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:35:22.992484 augenrules[1176]: No rules Feb 9 18:35:22.993364 systemd[1]: Finished audit-rules.service. Feb 9 18:35:23.015999 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:35:23.017162 systemd-timesyncd[1157]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 18:35:23.017219 systemd-timesyncd[1157]: Initial clock synchronization to Fri 2024-02-09 18:35:22.912733 UTC. Feb 9 18:35:23.017260 systemd[1]: Reached target time-set.target. Feb 9 18:35:23.028021 systemd-resolved[1156]: Positive Trust Anchors: Feb 9 18:35:23.028032 systemd-resolved[1156]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:35:23.028059 systemd-resolved[1156]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:35:23.038934 systemd-resolved[1156]: Defaulting to hostname 'linux'. Feb 9 18:35:23.043736 systemd[1]: Started systemd-resolved.service. Feb 9 18:35:23.044620 systemd[1]: Reached target network.target. Feb 9 18:35:23.045387 systemd[1]: Reached target nss-lookup.target. 
Feb 9 18:35:23.046186 systemd[1]: Reached target sysinit.target. Feb 9 18:35:23.047017 systemd[1]: Started motdgen.path. Feb 9 18:35:23.047691 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:35:23.048929 systemd[1]: Started logrotate.timer. Feb 9 18:35:23.049699 systemd[1]: Started mdadm.timer. Feb 9 18:35:23.050368 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:35:23.051186 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:35:23.051216 systemd[1]: Reached target paths.target. Feb 9 18:35:23.051941 systemd[1]: Reached target timers.target. Feb 9 18:35:23.052972 systemd[1]: Listening on dbus.socket. Feb 9 18:35:23.054762 systemd[1]: Starting docker.socket... Feb 9 18:35:23.056376 systemd[1]: Listening on sshd.socket. Feb 9 18:35:23.057205 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:35:23.057496 systemd[1]: Listening on docker.socket. Feb 9 18:35:23.058283 systemd[1]: Reached target sockets.target. Feb 9 18:35:23.059039 systemd[1]: Reached target basic.target. Feb 9 18:35:23.059893 systemd[1]: System is tainted: cgroupsv1 Feb 9 18:35:23.059939 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:35:23.059958 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:35:23.060911 systemd[1]: Starting containerd.service... Feb 9 18:35:23.062571 systemd[1]: Starting dbus.service... Feb 9 18:35:23.064405 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:35:23.066466 systemd[1]: Starting extend-filesystems.service... Feb 9 18:35:23.067342 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:35:23.068669 systemd[1]: Starting motdgen.service... Feb 9 18:35:23.070559 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:35:23.072568 systemd[1]: Starting prepare-critools.service... Feb 9 18:35:23.074952 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:35:23.076963 systemd[1]: Starting sshd-keygen.service... Feb 9 18:35:23.080141 systemd[1]: Starting systemd-logind.service... Feb 9 18:35:23.080714 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:35:23.080789 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:35:23.082108 jq[1188]: false Feb 9 18:35:23.082138 systemd[1]: Starting update-engine.service... Feb 9 18:35:23.083943 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 18:35:23.087623 jq[1206]: true Feb 9 18:35:23.088958 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:35:23.089237 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:35:23.090848 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 18:35:23.091092 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:35:23.109471 jq[1217]: true Feb 9 18:35:23.113008 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 9 18:35:23.113289 systemd[1]: Finished motdgen.service. Feb 9 18:35:23.118021 dbus-daemon[1187]: [system] SELinux support is enabled Feb 9 18:35:23.120322 systemd[1]: Started dbus.service. Feb 9 18:35:23.122462 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:35:23.122520 systemd[1]: Reached target system-config.target. Feb 9 18:35:23.123203 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:35:23.123226 systemd[1]: Reached target user-config.target. Feb 9 18:35:23.128729 tar[1213]: ./ Feb 9 18:35:23.128729 tar[1213]: ./macvlan Feb 9 18:35:23.129626 extend-filesystems[1189]: Found vda Feb 9 18:35:23.130536 extend-filesystems[1189]: Found vda1 Feb 9 18:35:23.130536 extend-filesystems[1189]: Found vda2 Feb 9 18:35:23.130536 extend-filesystems[1189]: Found vda3 Feb 9 18:35:23.130536 extend-filesystems[1189]: Found usr Feb 9 18:35:23.130536 extend-filesystems[1189]: Found vda4 Feb 9 18:35:23.130536 extend-filesystems[1189]: Found vda6 Feb 9 18:35:23.130536 extend-filesystems[1189]: Found vda7 Feb 9 18:35:23.130536 extend-filesystems[1189]: Found vda9 Feb 9 18:35:23.130536 extend-filesystems[1189]: Checking size of /dev/vda9 Feb 9 18:35:23.138641 tar[1214]: crictl Feb 9 18:35:23.162951 extend-filesystems[1189]: Resized partition /dev/vda9 Feb 9 18:35:23.164313 extend-filesystems[1249]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 18:35:23.166741 systemd-logind[1202]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 18:35:23.171885 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 18:35:23.178911 systemd-logind[1202]: New seat seat0. Feb 9 18:35:23.183254 systemd[1]: Started systemd-logind.service. Feb 9 18:35:23.204035 systemd[1]: Started update-engine.service. Feb 9 18:35:23.209162 update_engine[1204]: I0209 18:35:23.201696 1204 main.cc:92] Flatcar Update Engine starting Feb 9 18:35:23.209162 update_engine[1204]: I0209 18:35:23.204044 1204 update_check_scheduler.cc:74] Next update check in 5m12s Feb 9 18:35:23.206627 systemd[1]: Started locksmithd.service. Feb 9 18:35:23.215147 tar[1213]: ./static Feb 9 18:35:23.216889 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 18:35:23.236519 extend-filesystems[1249]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 18:35:23.236519 extend-filesystems[1249]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 18:35:23.236519 extend-filesystems[1249]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 18:35:23.239607 extend-filesystems[1189]: Resized filesystem in /dev/vda9 Feb 9 18:35:23.238474 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:35:23.241763 env[1218]: time="2024-02-09T18:35:23.236878000Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:35:23.238712 systemd[1]: Finished extend-filesystems.service. Feb 9 18:35:23.242361 bash[1245]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:35:23.242942 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:35:23.262696 env[1218]: time="2024-02-09T18:35:23.262650760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 9 18:35:23.262833 env[1218]: time="2024-02-09T18:35:23.262808680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:35:23.263997 env[1218]: time="2024-02-09T18:35:23.263965440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:35:23.264052 env[1218]: time="2024-02-09T18:35:23.263997320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:35:23.264267 env[1218]: time="2024-02-09T18:35:23.264242280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:35:23.264267 env[1218]: time="2024-02-09T18:35:23.264264960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:35:23.264323 env[1218]: time="2024-02-09T18:35:23.264278560Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:35:23.264323 env[1218]: time="2024-02-09T18:35:23.264289040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 18:35:23.264376 env[1218]: time="2024-02-09T18:35:23.264359240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:35:23.264649 env[1218]: time="2024-02-09T18:35:23.264627560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:35:23.264792 env[1218]: time="2024-02-09T18:35:23.264772360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:35:23.264823 env[1218]: time="2024-02-09T18:35:23.264791560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:35:23.264896 env[1218]: time="2024-02-09T18:35:23.264853000Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:35:23.264936 env[1218]: time="2024-02-09T18:35:23.264895680Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:35:23.265622 tar[1213]: ./vlan Feb 9 18:35:23.273067 env[1218]: time="2024-02-09T18:35:23.273031920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 18:35:23.273067 env[1218]: time="2024-02-09T18:35:23.273091520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 18:35:23.273067 env[1218]: time="2024-02-09T18:35:23.273109040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:35:23.273258 env[1218]: time="2024-02-09T18:35:23.273150840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 9 18:35:23.273258 env[1218]: time="2024-02-09T18:35:23.273168920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:35:23.273258 env[1218]: time="2024-02-09T18:35:23.273182360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:35:23.273258 env[1218]: time="2024-02-09T18:35:23.273195280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 18:35:23.273590 env[1218]: time="2024-02-09T18:35:23.273567520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:35:23.273618 env[1218]: time="2024-02-09T18:35:23.273592560Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:35:23.273618 env[1218]: time="2024-02-09T18:35:23.273606720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:35:23.273661 env[1218]: time="2024-02-09T18:35:23.273619960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:35:23.273661 env[1218]: time="2024-02-09T18:35:23.273633800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:35:23.273785 env[1218]: time="2024-02-09T18:35:23.273764920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 18:35:23.273896 env[1218]: time="2024-02-09T18:35:23.273853360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:35:23.274184 env[1218]: time="2024-02-09T18:35:23.274165720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:35:23.274222 env[1218]: time="2024-02-09T18:35:23.274196360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:35:23.274222 env[1218]: time="2024-02-09T18:35:23.274209880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:35:23.274323 env[1218]: time="2024-02-09T18:35:23.274308960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 18:35:23.274356 env[1218]: time="2024-02-09T18:35:23.274324680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:35:23.274356 env[1218]: time="2024-02-09T18:35:23.274337400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:35:23.274356 env[1218]: time="2024-02-09T18:35:23.274348600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:35:23.274412 env[1218]: time="2024-02-09T18:35:23.274361680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:35:23.274412 env[1218]: time="2024-02-09T18:35:23.274379680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:35:23.274412 env[1218]: time="2024-02-09T18:35:23.274391120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 9 18:35:23.274412 env[1218]: time="2024-02-09T18:35:23.274401600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:35:23.274489 env[1218]: time="2024-02-09T18:35:23.274414000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:35:23.274556 env[1218]: time="2024-02-09T18:35:23.274536840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:35:23.274584 env[1218]: time="2024-02-09T18:35:23.274563040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:35:23.274584 env[1218]: time="2024-02-09T18:35:23.274576760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 18:35:23.274622 env[1218]: time="2024-02-09T18:35:23.274587760Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:35:23.274622 env[1218]: time="2024-02-09T18:35:23.274601800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:35:23.274622 env[1218]: time="2024-02-09T18:35:23.274612240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 18:35:23.274682 env[1218]: time="2024-02-09T18:35:23.274629040Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:35:23.274682 env[1218]: time="2024-02-09T18:35:23.274661200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 18:35:23.274930 env[1218]: time="2024-02-09T18:35:23.274860520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:35:23.275562 env[1218]: time="2024-02-09T18:35:23.274940080Z" level=info msg="Connect containerd service" Feb 9 18:35:23.275562 env[1218]: time="2024-02-09T18:35:23.274973680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:35:23.275687 env[1218]: time="2024-02-09T18:35:23.275654120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:35:23.275878 env[1218]: time="2024-02-09T18:35:23.275843920Z" level=info msg="Start subscribing containerd event" Feb 9 18:35:23.275930 env[1218]: time="2024-02-09T18:35:23.275915800Z" level=info msg="Start recovering state" Feb 9 18:35:23.275989 env[1218]: time="2024-02-09T18:35:23.275975640Z" level=info msg="Start event monitor" Feb 9 18:35:23.276021 env[1218]: time="2024-02-09T18:35:23.275992720Z" level=info msg="Start snapshots syncer" Feb 9 18:35:23.276021 env[1218]: time="2024-02-09T18:35:23.276002160Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:35:23.276021 env[1218]: time="2024-02-09T18:35:23.276010360Z" level=info msg="Start streaming server" Feb 9 18:35:23.276383 env[1218]: time="2024-02-09T18:35:23.276362320Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 18:35:23.276414 env[1218]: time="2024-02-09T18:35:23.276407320Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 18:35:23.276551 systemd[1]: Started containerd.service. Feb 9 18:35:23.280679 env[1218]: time="2024-02-09T18:35:23.280649960Z" level=info msg="containerd successfully booted in 0.073042s" Feb 9 18:35:23.302823 tar[1213]: ./portmap Feb 9 18:35:23.331012 tar[1213]: ./host-local Feb 9 18:35:23.353331 tar[1213]: ./vrf Feb 9 18:35:23.378131 tar[1213]: ./bridge Feb 9 18:35:23.389448 locksmithd[1251]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:35:23.408189 tar[1213]: ./tuning Feb 9 18:35:23.431605 tar[1213]: ./firewall Feb 9 18:35:23.460809 tar[1213]: ./host-device Feb 9 18:35:23.486905 tar[1213]: ./sbr Feb 9 18:35:23.510557 tar[1213]: ./loopback Feb 9 18:35:23.533483 tar[1213]: ./dhcp Feb 9 18:35:23.587817 systemd[1]: Finished prepare-critools.service. Feb 9 18:35:23.602706 tar[1213]: ./ptp Feb 9 18:35:23.630641 tar[1213]: ./ipvlan Feb 9 18:35:23.657911 tar[1213]: ./bandwidth Feb 9 18:35:23.695204 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:35:24.261994 systemd-networkd[1103]: eth0: Gained IPv6LL Feb 9 18:35:24.988401 sshd_keygen[1219]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:35:25.005661 systemd[1]: Finished sshd-keygen.service. Feb 9 18:35:25.008134 systemd[1]: Starting issuegen.service... Feb 9 18:35:25.012452 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:35:25.012654 systemd[1]: Finished issuegen.service. Feb 9 18:35:25.014781 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:35:25.023401 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:35:25.025643 systemd[1]: Started getty@tty1.service. Feb 9 18:35:25.027682 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 18:35:25.028765 systemd[1]: Reached target getty.target. Feb 9 18:35:25.029635 systemd[1]: Reached target multi-user.target. Feb 9 18:35:25.031901 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:35:25.037825 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:35:25.038043 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:35:25.039107 systemd[1]: Startup finished in 5.877s (kernel) + 5.171s (userspace) = 11.049s. Feb 9 18:35:27.367510 systemd[1]: Created slice system-sshd.slice. Feb 9 18:35:27.368583 systemd[1]: Started sshd@0-10.0.0.94:22-10.0.0.1:55588.service. Feb 9 18:35:27.418384 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 55588 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:27.420581 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:27.427844 systemd[1]: Created slice user-500.slice. Feb 9 18:35:27.428699 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:35:27.431901 systemd-logind[1202]: New session 1 of user core. Feb 9 18:35:27.436185 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:35:27.437279 systemd[1]: Starting user@500.service... Feb 9 18:35:27.439906 (systemd)[1294]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:27.494739 systemd[1294]: Queued start job for default target default.target. Feb 9 18:35:27.494960 systemd[1294]: Reached target paths.target. Feb 9 18:35:27.494975 systemd[1294]: Reached target sockets.target. 
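
The long "Start cri plugin with config {...}" dump above is containerd's effective CRI configuration. Expressed as a config.toml fragment it corresponds roughly to the sketch below; the values (overlayfs snapshotter, runc with SystemdCgroup=false, pause:3.6 sandbox image, CNI paths, TCP service disabled) are taken from the dump, while the TOML layout itself is just the standard containerd 1.6 schema shown for orientation:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      disable_tcp_service = true
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
        max_conf_num = 1

The "failed to load cni during init" warning above is expected at this stage: the CNI plugin binaries are still being unpacked (the tar ./bridge, ./portmap, ... lines) and no network config has been written to /etc/cni/net.d yet.
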
Feb 9 18:35:27.494986 systemd[1294]: Reached target timers.target. Feb 9 18:35:27.495007 systemd[1294]: Reached target basic.target. Feb 9 18:35:27.495049 systemd[1294]: Reached target default.target. Feb 9 18:35:27.495074 systemd[1294]: Startup finished in 50ms. Feb 9 18:35:27.495158 systemd[1]: Started user@500.service. Feb 9 18:35:27.496097 systemd[1]: Started session-1.scope. Feb 9 18:35:27.546693 systemd[1]: Started sshd@1-10.0.0.94:22-10.0.0.1:55596.service. Feb 9 18:35:27.588235 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 55596 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:27.589661 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:27.592793 systemd-logind[1202]: New session 2 of user core. Feb 9 18:35:27.593578 systemd[1]: Started session-2.scope. Feb 9 18:35:27.647074 sshd[1304]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:27.649414 systemd[1]: Started sshd@2-10.0.0.94:22-10.0.0.1:55612.service. Feb 9 18:35:27.649827 systemd[1]: sshd@1-10.0.0.94:22-10.0.0.1:55596.service: Deactivated successfully. Feb 9 18:35:27.650794 systemd-logind[1202]: Session 2 logged out. Waiting for processes to exit. Feb 9 18:35:27.650847 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 18:35:27.651538 systemd-logind[1202]: Removed session 2. Feb 9 18:35:27.692196 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 55612 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:27.693391 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:27.696539 systemd-logind[1202]: New session 3 of user core. Feb 9 18:35:27.697305 systemd[1]: Started session-3.scope. Feb 9 18:35:27.746934 sshd[1310]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:27.749198 systemd[1]: Started sshd@3-10.0.0.94:22-10.0.0.1:55618.service. Feb 9 18:35:27.749626 systemd[1]: sshd@2-10.0.0.94:22-10.0.0.1:55612.service: Deactivated successfully. Feb 9 18:35:27.750550 systemd-logind[1202]: Session 3 logged out. Waiting for processes to exit. Feb 9 18:35:27.750614 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 18:35:27.751588 systemd-logind[1202]: Removed session 3. Feb 9 18:35:27.790402 sshd[1316]: Accepted publickey for core from 10.0.0.1 port 55618 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:27.791742 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:27.794832 systemd-logind[1202]: New session 4 of user core. Feb 9 18:35:27.795639 systemd[1]: Started session-4.scope. Feb 9 18:35:27.848133 sshd[1316]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:27.850261 systemd[1]: Started sshd@4-10.0.0.94:22-10.0.0.1:55634.service. Feb 9 18:35:27.850675 systemd[1]: sshd@3-10.0.0.94:22-10.0.0.1:55618.service: Deactivated successfully. Feb 9 18:35:27.851611 systemd-logind[1202]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:35:27.851679 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:35:27.852621 systemd-logind[1202]: Removed session 4. Feb 9 18:35:27.891346 sshd[1323]: Accepted publickey for core from 10.0.0.1 port 55634 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:27.892722 sshd[1323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:27.895878 systemd-logind[1202]: New session 5 of user core. Feb 9 18:35:27.896658 systemd[1]: Started session-5.scope. 
Feb 9 18:35:27.954408 sudo[1329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 18:35:27.954623 sudo[1329]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:35:27.965999 dbus-daemon[1187]: avc: received setenforce notice (enforcing=1) Feb 9 18:35:27.966792 sudo[1329]: pam_unix(sudo:session): session closed for user root Feb 9 18:35:27.968530 sshd[1323]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:27.970916 systemd[1]: Started sshd@5-10.0.0.94:22-10.0.0.1:55650.service. Feb 9 18:35:27.971461 systemd[1]: sshd@4-10.0.0.94:22-10.0.0.1:55634.service: Deactivated successfully. Feb 9 18:35:27.972327 systemd-logind[1202]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:35:27.972372 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:35:27.973032 systemd-logind[1202]: Removed session 5. Feb 9 18:35:28.012322 sshd[1331]: Accepted publickey for core from 10.0.0.1 port 55650 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:28.013416 sshd[1331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:28.016903 systemd-logind[1202]: New session 6 of user core. Feb 9 18:35:28.017654 systemd[1]: Started session-6.scope. Feb 9 18:35:28.068673 sudo[1338]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 18:35:28.069182 sudo[1338]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:35:28.071827 sudo[1338]: pam_unix(sudo:session): session closed for user root Feb 9 18:35:28.075967 sudo[1337]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 18:35:28.076171 sudo[1337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:35:28.083992 systemd[1]: Stopping audit-rules.service... Feb 9 18:35:28.084000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 18:35:28.085161 auditctl[1341]: No rules Feb 9 18:35:28.085419 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 18:35:28.085625 systemd[1]: Stopped audit-rules.service. Feb 9 18:35:28.086743 kernel: kauditd_printk_skb: 97 callbacks suppressed Feb 9 18:35:28.086808 kernel: audit: type=1305 audit(1707503728.084:128): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 18:35:28.086833 kernel: audit: type=1300 audit(1707503728.084:128): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc31cd590 a2=420 a3=0 items=0 ppid=1 pid=1341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:28.084000 audit[1341]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc31cd590 a2=420 a3=0 items=0 ppid=1 pid=1341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:28.087040 systemd[1]: Starting audit-rules.service... 
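
The sudo records above show what the core user's script is doing in sessions 5-6: switching SELinux to enforcing (dbus logs "enforcing=1"), deleting the shipped audit rule files, and restarting audit-rules, which flushes the loaded ruleset first (the PROCTITLE hex 2F7362696E2F617564697463746C002D44 decodes to /sbin/auditctl -D, hence the "No rules" lines from auditctl and augenrules). Replayed as plain shell, the sequence is roughly:

    setenforce 1                                   # SELinux to enforcing; dbus reports enforcing=1
    rm -rf /etc/audit/rules.d/80-selinux.rules \
           /etc/audit/rules.d/99-default.rules     # remove the packaged audit rule files
    systemctl restart audit-rules                  # stop flushes rules via auditctl -D, start reloads
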
Feb 9 18:35:28.089258 kernel: audit: type=1327 audit(1707503728.084:128): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 18:35:28.084000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 18:35:28.089994 kernel: audit: type=1131 audit(1707503728.084:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.101992 augenrules[1359]: No rules Feb 9 18:35:28.102610 systemd[1]: Finished audit-rules.service. Feb 9 18:35:28.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.103652 sudo[1337]: pam_unix(sudo:session): session closed for user root Feb 9 18:35:28.102000 audit[1337]: USER_END pid=1337 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.107285 kernel: audit: type=1130 audit(1707503728.101:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.107343 kernel: audit: type=1106 audit(1707503728.102:131): pid=1337 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.107536 systemd[1]: Started sshd@6-10.0.0.94:22-10.0.0.1:55652.service. Feb 9 18:35:28.107651 sshd[1331]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:28.102000 audit[1337]: CRED_DISP pid=1337 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.110476 kernel: audit: type=1104 audit(1707503728.102:132): pid=1337 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.94:22-10.0.0.1:55652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.112777 kernel: audit: type=1130 audit(1707503728.106:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.94:22-10.0.0.1:55652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:28.111000 audit[1331]: USER_END pid=1331 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:28.114132 systemd[1]: sshd@5-10.0.0.94:22-10.0.0.1:55650.service: Deactivated successfully. Feb 9 18:35:28.114764 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:35:28.116025 systemd-logind[1202]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:35:28.116759 kernel: audit: type=1106 audit(1707503728.111:134): pid=1331 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:28.116932 kernel: audit: type=1104 audit(1707503728.111:135): pid=1331 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:28.111000 audit[1331]: CRED_DISP pid=1331 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:28.117287 systemd-logind[1202]: Removed session 6. Feb 9 18:35:28.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.94:22-10.0.0.1:55650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.149000 audit[1364]: USER_ACCT pid=1364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:28.150759 sshd[1364]: Accepted publickey for core from 10.0.0.1 port 55652 ssh2: RSA SHA256:+mjqzBx0jAExxLmXQAd9A2jZt2TW46e6NSOiRFb14I8 Feb 9 18:35:28.150000 audit[1364]: CRED_ACQ pid=1364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:28.150000 audit[1364]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2b4a630 a2=3 a3=1 items=0 ppid=1 pid=1364 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:28.150000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:35:28.152201 sshd[1364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:35:28.156217 systemd[1]: Started session-7.scope. Feb 9 18:35:28.156559 systemd-logind[1202]: New session 7 of user core. 
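
The audit PROCTITLE fields are hex-encoded command lines with NUL separators between argv elements, so they can be decoded locally with standard tools. For example, the value just above, 737368643A20636F7265205B707269765D, decodes to "sshd: core [priv]"; a minimal sketch:

    echo 737368643A20636F7265205B707269765D | xxd -r -p | tr '\0' ' '
    # -> sshd: core [priv]
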
Feb 9 18:35:28.159000 audit[1364]: USER_START pid=1364 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:28.160000 audit[1369]: CRED_ACQ pid=1369 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:28.206000 audit[1370]: USER_ACCT pid=1370 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.207812 sudo[1370]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:35:28.208050 sudo[1370]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:35:28.207000 audit[1370]: CRED_REFR pid=1370 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.209000 audit[1370]: USER_START pid=1370 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.747924 systemd[1]: Reloading. Feb 9 18:35:28.785581 /usr/lib/systemd/system-generators/torcx-generator[1400]: time="2024-02-09T18:35:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:35:28.785608 /usr/lib/systemd/system-generators/torcx-generator[1400]: time="2024-02-09T18:35:28Z" level=info msg="torcx already run" Feb 9 18:35:28.846428 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:35:28.846444 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:35:28.863149 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:35:28.914644 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:35:28.920493 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:35:28.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.920902 systemd[1]: Reached target network-online.target. Feb 9 18:35:28.922246 systemd[1]: Started kubelet.service. Feb 9 18:35:28.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:28.931421 systemd[1]: Starting coreos-metadata.service... Feb 9 18:35:28.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:28.937702 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 9 18:35:28.937930 systemd[1]: Finished coreos-metadata.service. Feb 9 18:35:29.098646 kubelet[1444]: E0209 18:35:29.098488 1444 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:35:29.100502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:35:29.100632 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:35:29.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 18:35:29.214817 systemd[1]: Stopped kubelet.service. Feb 9 18:35:29.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:29.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:29.229844 systemd[1]: Reloading. Feb 9 18:35:29.286229 /usr/lib/systemd/system-generators/torcx-generator[1516]: time="2024-02-09T18:35:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:35:29.286258 /usr/lib/systemd/system-generators/torcx-generator[1516]: time="2024-02-09T18:35:29Z" level=info msg="torcx already run" Feb 9 18:35:29.346457 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:35:29.346602 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:35:29.363345 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:35:29.421302 systemd[1]: Started kubelet.service. Feb 9 18:35:29.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:29.464368 kubelet[1561]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
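
The first kubelet start above fails flag validation because no container runtime endpoint was configured. Given that containerd is serving on /run/containerd/containerd.sock (see the "msg=serving... address=/run/containerd/containerd.sock" line earlier), the missing setting would be the flag named by the error, e.g. on the kubelet's ExecStart or in a systemd drop-in; the line below is illustrative only, not taken from this system's unit files:

    # illustrative: the flag named by the error, pointed at the socket containerd reported above
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock
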
Feb 9 18:35:29.464368 kubelet[1561]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:35:29.464743 kubelet[1561]: I0209 18:35:29.464621 1561 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:35:29.466403 kubelet[1561]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:35:29.466403 kubelet[1561]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:35:30.687454 kubelet[1561]: I0209 18:35:30.687414 1561 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:35:30.687454 kubelet[1561]: I0209 18:35:30.687443 1561 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:35:30.687783 kubelet[1561]: I0209 18:35:30.687646 1561 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:35:30.692118 kubelet[1561]: I0209 18:35:30.692090 1561 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:35:30.694440 kubelet[1561]: W0209 18:35:30.694413 1561 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:35:30.695201 kubelet[1561]: I0209 18:35:30.695182 1561 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 18:35:30.695603 kubelet[1561]: I0209 18:35:30.695582 1561 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:35:30.695672 kubelet[1561]: I0209 18:35:30.695661 1561 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:35:30.695749 kubelet[1561]: I0209 18:35:30.695741 1561 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 
18:35:30.695781 kubelet[1561]: I0209 18:35:30.695753 1561 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:35:30.695934 kubelet[1561]: I0209 18:35:30.695923 1561 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:35:30.699666 kubelet[1561]: I0209 18:35:30.699633 1561 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:35:30.699666 kubelet[1561]: I0209 18:35:30.699664 1561 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:35:30.699759 kubelet[1561]: I0209 18:35:30.699750 1561 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:35:30.699781 kubelet[1561]: I0209 18:35:30.699761 1561 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:35:30.699923 kubelet[1561]: E0209 18:35:30.699906 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:30.699991 kubelet[1561]: E0209 18:35:30.699963 1561 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:30.700818 kubelet[1561]: I0209 18:35:30.700796 1561 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:35:30.701665 kubelet[1561]: W0209 18:35:30.701650 1561 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:35:30.702173 kubelet[1561]: I0209 18:35:30.702158 1561 server.go:1186] "Started kubelet" Feb 9 18:35:30.702361 kubelet[1561]: I0209 18:35:30.702346 1561 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:35:30.703023 kubelet[1561]: I0209 18:35:30.703002 1561 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:35:30.703921 kubelet[1561]: E0209 18:35:30.703892 1561 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:35:30.703921 kubelet[1561]: E0209 18:35:30.703917 1561 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:35:30.704000 audit[1561]: AVC avc: denied { mac_admin } for pid=1561 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:30.704000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:35:30.704000 audit[1561]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000ae7800 a1=400026bcb0 a2=4000ae77d0 a3=25 items=0 ppid=1 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.704000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:35:30.705429 kubelet[1561]: I0209 18:35:30.705413 1561 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 18:35:30.704000 audit[1561]: AVC avc: denied { mac_admin } for pid=1561 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:30.704000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:35:30.704000 audit[1561]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000d365a0 a1=400026bcc8 a2=4000ae7890 a3=25 items=0 ppid=1 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.704000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:35:30.705697 kubelet[1561]: I0209 18:35:30.705684 1561 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 18:35:30.705837 kubelet[1561]: I0209 18:35:30.705822 1561 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:35:30.706482 kubelet[1561]: I0209 18:35:30.706029 1561 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:35:30.706482 kubelet[1561]: I0209 18:35:30.706096 1561 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:35:30.706482 kubelet[1561]: E0209 18:35:30.706123 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:30.718392 kubelet[1561]: E0209 18:35:30.717726 1561 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:35:30.718392 kubelet[1561]: W0209 18:35:30.717828 1561 
reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:35:30.718392 kubelet[1561]: E0209 18:35:30.717849 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:35:30.718392 kubelet[1561]: W0209 18:35:30.717907 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:35:30.718392 kubelet[1561]: E0209 18:35:30.717917 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:35:30.718392 kubelet[1561]: W0209 18:35:30.718180 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:35:30.718392 kubelet[1561]: E0209 18:35:30.718215 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:35:30.718618 kubelet[1561]: E0209 18:35:30.718292 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598a9279e58", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 702130776, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 702130776, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
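
The repeated 'User "system:anonymous" cannot list/get ...' errors indicate the kubelet is reaching the API server before its client-certificate bootstrap has completed ("Client rotation is on, will bootstrap in background" above; the process was started with --bootstrap-kubeconfig per its audit proctitle). Until the bootstrap CSR is approved, list/watch calls and event posts are rejected. On the control-plane side this normally shows up as a pending certificate signing request; as a sketch with standard kubectl (names are placeholders):

    kubectl get csr                           # look for a Pending kubelet client CSR
    kubectl certificate approve <csr-name>    # approve it if auto-approval is not configured
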
Feb 9 18:35:30.725357 kubelet[1561]: E0209 18:35:30.725065 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598a942c0ca", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 703909066, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 703909066, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:30.734000 audit[1578]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.734000 audit[1578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc4113960 a2=0 a3=1 items=0 ppid=1561 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 18:35:30.737901 kubelet[1561]: I0209 18:35:30.737822 1561 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:35:30.737000 audit[1580]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.737000 audit[1580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffffb7e12d0 a2=0 a3=1 items=0 ppid=1561 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 18:35:30.738149 kubelet[1561]: I0209 18:35:30.738133 1561 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:35:30.738218 kubelet[1561]: I0209 18:35:30.738207 1561 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:35:30.738446 kubelet[1561]: E0209 18:35:30.738347 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3dd2de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737140446, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737140446, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:30.740311 kubelet[1561]: I0209 18:35:30.740290 1561 policy_none.go:49] "None policy: Start" Feb 9 18:35:30.740463 kubelet[1561]: E0209 18:35:30.740257 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e028e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737152654, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737152654, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:35:30.741287 kubelet[1561]: I0209 18:35:30.741269 1561 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:35:30.741488 kubelet[1561]: I0209 18:35:30.741475 1561 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:35:30.742282 kubelet[1561]: E0209 18:35:30.742197 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e0f4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737155915, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737155915, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:30.753828 kubelet[1561]: I0209 18:35:30.753803 1561 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:35:30.753000 audit[1561]: AVC avc: denied { mac_admin } for pid=1561 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:35:30.753000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:35:30.753000 audit[1561]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000cc26f0 a1=400107d7e8 a2=4000cc26c0 a3=25 items=0 ppid=1 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.753000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:35:30.754199 kubelet[1561]: I0209 18:35:30.754185 1561 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 18:35:30.754430 kubelet[1561]: I0209 18:35:30.754414 1561 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:35:30.755090 kubelet[1561]: E0209 18:35:30.755068 1561 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.94\" not found" Feb 9 18:35:30.739000 audit[1582]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.739000 audit[1582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe6371c10 a2=0 a3=1 items=0 ppid=1561 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 18:35:30.757052 kubelet[1561]: E0209 18:35:30.756977 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ac44743b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 754352187, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 754352187, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:35:30.757000 audit[1587]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.757000 audit[1587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc4a76650 a2=0 a3=1 items=0 ppid=1561 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 18:35:30.784000 audit[1592]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.784000 audit[1592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff4073580 a2=0 a3=1 items=0 ppid=1561 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 18:35:30.785000 audit[1593]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.785000 audit[1593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffebf65df0 a2=0 a3=1 items=0 ppid=1561 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 18:35:30.790000 audit[1596]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.790000 audit[1596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffebc89af0 a2=0 a3=1 items=0 ppid=1561 pid=1596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.790000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 18:35:30.794000 audit[1599]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.794000 audit[1599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffd734b970 a2=0 a3=1 items=0 ppid=1561 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.794000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 18:35:30.795000 audit[1600]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.795000 audit[1600]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd2a4a640 a2=0 a3=1 items=0 ppid=1561 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 18:35:30.796000 audit[1601]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.796000 audit[1601]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff119b010 a2=0 a3=1 items=0 ppid=1561 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 18:35:30.798000 audit[1603]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.798000 audit[1603]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc9c98900 a2=0 a3=1 items=0 ppid=1561 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.798000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 18:35:30.807559 kubelet[1561]: I0209 18:35:30.807521 1561 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.94" Feb 9 18:35:30.808497 kubelet[1561]: E0209 18:35:30.808461 1561 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.94" Feb 9 18:35:30.808970 kubelet[1561]: E0209 18:35:30.808884 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3dd2de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.94 status is now: 
NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737140446, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 807485389, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3dd2de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:30.809853 kubelet[1561]: E0209 18:35:30.809783 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e028e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737152654, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 807490121, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e028e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:35:30.810926 kubelet[1561]: E0209 18:35:30.810840 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e0f4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737155915, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 807492945, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e0f4b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:30.800000 audit[1605]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.800000 audit[1605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff645e860 a2=0 a3=1 items=0 ppid=1561 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 18:35:30.819000 audit[1608]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.819000 audit[1608]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffccf76ad0 a2=0 a3=1 items=0 ppid=1561 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 18:35:30.821000 audit[1610]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.821000 audit[1610]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffffb6af210 a2=0 a3=1 items=0 ppid=1561 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.821000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 18:35:30.828000 audit[1613]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.828000 audit[1613]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffff9448d30 a2=0 a3=1 items=0 ppid=1561 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 18:35:30.829343 kubelet[1561]: I0209 18:35:30.829325 1561 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:35:30.829000 audit[1614]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.829000 audit[1614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc535ab40 a2=0 a3=1 items=0 ppid=1561 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.829000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 18:35:30.829000 audit[1615]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.829000 audit[1615]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc18ac5d0 a2=0 a3=1 items=0 ppid=1561 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 18:35:30.830000 audit[1616]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1616 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.830000 audit[1616]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffefa990e0 a2=0 a3=1 items=0 ppid=1561 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 18:35:30.831000 audit[1617]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.831000 audit[1617]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf209a00 a2=0 a3=1 items=0 ppid=1561 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 18:35:30.831000 audit[1619]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1619 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:30.831000 audit[1619]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1d5ac80 a2=0 a3=1 items=0 ppid=1561 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 18:35:30.831000 audit[1620]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1620 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.831000 audit[1620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe9829df0 a2=0 a3=1 items=0 ppid=1561 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.831000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 18:35:30.832000 audit[1621]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1621 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.832000 audit[1621]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffc30753b0 a2=0 a3=1 items=0 ppid=1561 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.832000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 18:35:30.834000 audit[1623]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1623 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.834000 audit[1623]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffe9d118b0 a2=0 a3=1 items=0 ppid=1561 pid=1623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.834000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 18:35:30.835000 audit[1624]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1624 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.835000 audit[1624]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffeb0bc0b0 a2=0 a3=1 items=0 ppid=1561 pid=1624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 18:35:30.835000 audit[1625]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1625 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.835000 audit[1625]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc2795c0 a2=0 a3=1 items=0 ppid=1561 pid=1625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 18:35:30.837000 audit[1627]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1627 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.837000 audit[1627]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffdddd12f0 a2=0 a3=1 items=0 ppid=1561 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.837000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 18:35:30.839000 audit[1629]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.839000 audit[1629]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc45c3570 a2=0 a3=1 items=0 ppid=1561 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.839000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 18:35:30.842000 audit[1631]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.842000 audit[1631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffff4e37830 a2=0 a3=1 items=0 ppid=1561 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 18:35:30.844000 audit[1633]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1633 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.844000 audit[1633]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffffdb8cd10 a2=0 a3=1 items=0 ppid=1561 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 18:35:30.847000 audit[1635]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.847000 audit[1635]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffc2c68890 a2=0 a3=1 items=0 ppid=1561 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 18:35:30.849468 kubelet[1561]: I0209 18:35:30.849436 1561 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:35:30.849468 kubelet[1561]: I0209 18:35:30.849462 1561 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:35:30.849543 kubelet[1561]: I0209 18:35:30.849478 1561 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:35:30.849543 kubelet[1561]: E0209 18:35:30.849529 1561 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:35:30.848000 audit[1636]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1636 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.848000 audit[1636]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeeed4870 a2=0 a3=1 items=0 ppid=1561 pid=1636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.848000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 18:35:30.850940 kubelet[1561]: W0209 18:35:30.850808 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:35:30.850940 kubelet[1561]: E0209 18:35:30.850833 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:35:30.849000 audit[1637]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1637 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.849000 audit[1637]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa16fc80 a2=0 a3=1 items=0 ppid=1561 pid=1637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
18:35:30.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 18:35:30.850000 audit[1638]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1638 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:30.850000 audit[1638]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3ed5ff0 a2=0 a3=1 items=0 ppid=1561 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.850000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 18:35:30.918950 kubelet[1561]: E0209 18:35:30.918903 1561 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:35:31.010271 kubelet[1561]: I0209 18:35:31.010178 1561 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.94" Feb 9 18:35:31.011906 kubelet[1561]: E0209 18:35:31.011875 1561 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.94" Feb 9 18:35:31.012197 kubelet[1561]: E0209 18:35:31.011990 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3dd2de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737140446, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 31, 10124870, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3dd2de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:35:31.013379 kubelet[1561]: E0209 18:35:31.013322 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e028e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737152654, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 31, 10146121, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e028e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:31.104699 kubelet[1561]: E0209 18:35:31.104622 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e0f4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737155915, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 31, 10151652, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e0f4b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:35:31.320875 kubelet[1561]: E0209 18:35:31.320764 1561 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:35:31.412659 kubelet[1561]: I0209 18:35:31.412635 1561 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.94" Feb 9 18:35:31.413797 kubelet[1561]: E0209 18:35:31.413776 1561 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.94" Feb 9 18:35:31.414151 kubelet[1561]: E0209 18:35:31.414083 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3dd2de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737140446, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 31, 412602660, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3dd2de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:35:31.504293 kubelet[1561]: E0209 18:35:31.504202 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e028e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737152654, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 31, 412607356, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e028e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:31.700278 kubelet[1561]: E0209 18:35:31.700154 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:31.705358 kubelet[1561]: E0209 18:35:31.705267 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e0f4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737155915, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 31, 412610221, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e0f4b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:35:31.891349 kubelet[1561]: W0209 18:35:31.891314 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:35:31.891349 kubelet[1561]: E0209 18:35:31.891348 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:35:32.122465 kubelet[1561]: E0209 18:35:32.122363 1561 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:35:32.136258 kubelet[1561]: W0209 18:35:32.136237 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:35:32.136363 kubelet[1561]: E0209 18:35:32.136352 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:35:32.145399 kubelet[1561]: W0209 18:35:32.145379 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:35:32.145498 kubelet[1561]: E0209 18:35:32.145486 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:35:32.215405 kubelet[1561]: I0209 18:35:32.215369 1561 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.94" Feb 9 18:35:32.216710 kubelet[1561]: E0209 18:35:32.216687 1561 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.94" Feb 9 18:35:32.216806 kubelet[1561]: E0209 18:35:32.216703 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3dd2de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737140446, 
time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 32, 215322370, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3dd2de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:32.217761 kubelet[1561]: E0209 18:35:32.217699 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e028e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737152654, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 32, 215334037, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e028e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:32.305318 kubelet[1561]: E0209 18:35:32.305214 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e0f4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737155915, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 32, 215338298, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e0f4b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:35:32.388766 kubelet[1561]: W0209 18:35:32.388670 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:35:32.388766 kubelet[1561]: E0209 18:35:32.388701 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:35:32.700972 kubelet[1561]: E0209 18:35:32.700845 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:33.514073 kubelet[1561]: W0209 18:35:33.514034 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:35:33.514073 kubelet[1561]: E0209 18:35:33.514071 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:35:33.701745 kubelet[1561]: E0209 18:35:33.701707 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:33.724028 kubelet[1561]: E0209 18:35:33.723994 1561 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:35:33.818077 kubelet[1561]: I0209 18:35:33.817979 1561 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.94" Feb 9 18:35:33.819408 kubelet[1561]: E0209 18:35:33.819377 1561 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.94" Feb 9 18:35:33.819526 kubelet[1561]: E0209 18:35:33.819376 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3dd2de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737140446, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 33, 817939386, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3dd2de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:33.820386 kubelet[1561]: E0209 18:35:33.820327 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e028e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737152654, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 33, 817952136, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e028e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:33.821079 kubelet[1561]: E0209 18:35:33.821026 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e0f4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737155915, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 33, 817955363, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e0f4b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:35:34.211729 kubelet[1561]: W0209 18:35:34.211636 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:35:34.211909 kubelet[1561]: E0209 18:35:34.211884 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:35:34.552154 kubelet[1561]: W0209 18:35:34.552059 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:35:34.552298 kubelet[1561]: E0209 18:35:34.552286 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 18:35:34.702639 kubelet[1561]: E0209 18:35:34.702601 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:34.705046 kubelet[1561]: W0209 18:35:34.705016 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:35:34.705046 kubelet[1561]: E0209 18:35:34.705041 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:35:35.703060 kubelet[1561]: E0209 18:35:35.703014 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:36.703732 kubelet[1561]: E0209 18:35:36.703654 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:36.927032 kubelet[1561]: E0209 18:35:36.926980 1561 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.94" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Feb 9 18:35:36.972461 kubelet[1561]: W0209 18:35:36.972266 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:35:36.972461 kubelet[1561]: E0209 18:35:36.972294 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 18:35:37.020390 kubelet[1561]: I0209 18:35:37.020352 1561 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.94" Feb 9 18:35:37.021332 kubelet[1561]: E0209 18:35:37.021305 1561 
kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.94" Feb 9 18:35:37.021643 kubelet[1561]: E0209 18:35:37.021556 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3dd2de", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.94 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737140446, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 37, 20300067, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3dd2de" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:37.022592 kubelet[1561]: E0209 18:35:37.022528 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e028e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.94 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737152654, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 37, 20312639, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e028e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 18:35:37.023546 kubelet[1561]: E0209 18:35:37.023480 1561 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94.17b24598ab3e0f4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.94", UID:"10.0.0.94", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.94 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.94"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 35, 30, 737155915, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 35, 37, 20325130, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.94.17b24598ab3e0f4b" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 18:35:37.703988 kubelet[1561]: E0209 18:35:37.703949 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:38.704106 kubelet[1561]: E0209 18:35:38.704067 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:39.194202 kubelet[1561]: W0209 18:35:39.194175 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:35:39.194202 kubelet[1561]: E0209 18:35:39.194204 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 18:35:39.690348 kubelet[1561]: W0209 18:35:39.690317 1561 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:35:39.690348 kubelet[1561]: E0209 18:35:39.690355 1561 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 18:35:39.704631 kubelet[1561]: E0209 18:35:39.704599 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:40.690383 kubelet[1561]: I0209 18:35:40.690346 1561 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 18:35:40.704740 kubelet[1561]: E0209 18:35:40.704717 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 
18:35:40.755757 kubelet[1561]: E0209 18:35:40.755713 1561 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.94\" not found" Feb 9 18:35:41.063918 kubelet[1561]: E0209 18:35:41.063877 1561 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.94" not found Feb 9 18:35:41.706035 kubelet[1561]: E0209 18:35:41.705997 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:42.320707 kubelet[1561]: E0209 18:35:42.320660 1561 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.94" not found Feb 9 18:35:42.707011 kubelet[1561]: E0209 18:35:42.706797 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:43.338837 kubelet[1561]: E0209 18:35:43.338800 1561 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.94\" not found" node="10.0.0.94" Feb 9 18:35:43.422989 kubelet[1561]: I0209 18:35:43.422967 1561 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.94" Feb 9 18:35:43.707754 kubelet[1561]: E0209 18:35:43.707514 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:43.722106 kubelet[1561]: I0209 18:35:43.722064 1561 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.94" Feb 9 18:35:43.731224 kubelet[1561]: E0209 18:35:43.731196 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:43.831958 kubelet[1561]: E0209 18:35:43.831923 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:43.933129 kubelet[1561]: E0209 18:35:43.933066 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:43.941384 kernel: kauditd_printk_skb: 130 callbacks suppressed Feb 9 18:35:43.941464 kernel: audit: type=1106 audit(1707503743.939:189): pid=1370 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:43.939000 audit[1370]: USER_END pid=1370 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:43.940555 sudo[1370]: pam_unix(sudo:session): session closed for user root Feb 9 18:35:43.939000 audit[1370]: CRED_DISP pid=1370 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:35:43.944381 sshd[1364]: pam_unix(sshd:session): session closed for user core Feb 9 18:35:43.945470 kernel: audit: type=1104 audit(1707503743.939:190): pid=1370 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 18:35:43.945000 audit[1364]: USER_END pid=1364 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:43.948345 kernel: audit: type=1106 audit(1707503743.945:191): pid=1364 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:43.948643 systemd[1]: sshd@6-10.0.0.94:22-10.0.0.1:55652.service: Deactivated successfully. Feb 9 18:35:43.945000 audit[1364]: CRED_DISP pid=1364 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:43.949903 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:35:43.949958 systemd-logind[1202]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:35:43.951172 kernel: audit: type=1104 audit(1707503743.945:192): pid=1364 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 18:35:43.951224 kernel: audit: type=1131 audit(1707503743.948:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.94:22-10.0.0.1:55652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:43.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.94:22-10.0.0.1:55652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:35:43.951369 systemd-logind[1202]: Removed session 7. 
Feb 9 18:35:44.033843 kubelet[1561]: E0209 18:35:44.033803 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:44.134425 kubelet[1561]: E0209 18:35:44.134389 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:44.234962 kubelet[1561]: E0209 18:35:44.234913 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:44.335794 kubelet[1561]: E0209 18:35:44.335697 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:44.436397 kubelet[1561]: E0209 18:35:44.436355 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:44.536815 kubelet[1561]: E0209 18:35:44.536783 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:44.637230 kubelet[1561]: E0209 18:35:44.637141 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:44.707929 kubelet[1561]: E0209 18:35:44.707904 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:44.738056 kubelet[1561]: E0209 18:35:44.738035 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:44.838322 kubelet[1561]: E0209 18:35:44.838287 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:44.938825 kubelet[1561]: E0209 18:35:44.938723 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:45.039287 kubelet[1561]: E0209 18:35:45.039247 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:45.139758 kubelet[1561]: E0209 18:35:45.139719 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:45.240442 kubelet[1561]: E0209 18:35:45.240330 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:45.340883 kubelet[1561]: E0209 18:35:45.340839 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:45.441317 kubelet[1561]: E0209 18:35:45.441276 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:45.541830 kubelet[1561]: E0209 18:35:45.541794 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:45.642341 kubelet[1561]: E0209 18:35:45.642303 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:45.708258 kubelet[1561]: E0209 18:35:45.708226 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:45.742831 kubelet[1561]: E0209 18:35:45.742798 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:45.843363 kubelet[1561]: E0209 18:35:45.843256 1561 kubelet_node_status.go:458] "Error 
getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:45.943880 kubelet[1561]: E0209 18:35:45.943830 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:46.044383 kubelet[1561]: E0209 18:35:46.044332 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:46.145101 kubelet[1561]: E0209 18:35:46.144991 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:46.245682 kubelet[1561]: E0209 18:35:46.245641 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:46.346253 kubelet[1561]: E0209 18:35:46.346199 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:46.446585 kubelet[1561]: E0209 18:35:46.446482 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:46.546977 kubelet[1561]: E0209 18:35:46.546950 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:46.648074 kubelet[1561]: E0209 18:35:46.648029 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:46.708816 kubelet[1561]: E0209 18:35:46.708736 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:46.748922 kubelet[1561]: E0209 18:35:46.748890 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:46.849499 kubelet[1561]: E0209 18:35:46.849464 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:46.949832 kubelet[1561]: E0209 18:35:46.949794 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:47.050296 kubelet[1561]: E0209 18:35:47.050251 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:47.150696 kubelet[1561]: E0209 18:35:47.150662 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:47.251261 kubelet[1561]: E0209 18:35:47.251226 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:47.351755 kubelet[1561]: E0209 18:35:47.351653 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:47.452466 kubelet[1561]: E0209 18:35:47.452426 1561 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Feb 9 18:35:47.553095 kubelet[1561]: I0209 18:35:47.553064 1561 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 18:35:47.553400 env[1218]: time="2024-02-09T18:35:47.553349624Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 18:35:47.553655 kubelet[1561]: I0209 18:35:47.553522 1561 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 18:35:47.708318 kubelet[1561]: I0209 18:35:47.707737 1561 apiserver.go:52] "Watching apiserver" Feb 9 18:35:47.708858 kubelet[1561]: E0209 18:35:47.708829 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:47.710685 kubelet[1561]: I0209 18:35:47.710659 1561 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:47.710745 kubelet[1561]: I0209 18:35:47.710731 1561 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:47.710777 kubelet[1561]: I0209 18:35:47.710772 1561 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:35:47.711202 kubelet[1561]: E0209 18:35:47.711181 1561 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gm2m" podUID=8d972391-7603-4ba7-9c5f-70ab2777349a Feb 9 18:35:47.807421 kubelet[1561]: I0209 18:35:47.807380 1561 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:35:47.883809 kubelet[1561]: I0209 18:35:47.883787 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/650607dd-ab29-464c-97ca-372568502f70-xtables-lock\") pod \"kube-proxy-rmvwn\" (UID: \"650607dd-ab29-464c-97ca-372568502f70\") " pod="kube-system/kube-proxy-rmvwn" Feb 9 18:35:47.883912 kubelet[1561]: I0209 18:35:47.883824 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8d972391-7603-4ba7-9c5f-70ab2777349a-registration-dir\") pod \"csi-node-driver-6gm2m\" (UID: \"8d972391-7603-4ba7-9c5f-70ab2777349a\") " pod="calico-system/csi-node-driver-6gm2m" Feb 9 18:35:47.883912 kubelet[1561]: I0209 18:35:47.883847 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7479\" (UniqueName: \"kubernetes.io/projected/8d972391-7603-4ba7-9c5f-70ab2777349a-kube-api-access-k7479\") pod \"csi-node-driver-6gm2m\" (UID: \"8d972391-7603-4ba7-9c5f-70ab2777349a\") " pod="calico-system/csi-node-driver-6gm2m" Feb 9 18:35:47.883912 kubelet[1561]: I0209 18:35:47.883888 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/98a6994b-5faa-4e05-b66c-96f857041f51-var-run-calico\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.883912 kubelet[1561]: I0209 18:35:47.883910 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/98a6994b-5faa-4e05-b66c-96f857041f51-cni-log-dir\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.884024 kubelet[1561]: I0209 18:35:47.883932 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8d972391-7603-4ba7-9c5f-70ab2777349a-socket-dir\") pod 
\"csi-node-driver-6gm2m\" (UID: \"8d972391-7603-4ba7-9c5f-70ab2777349a\") " pod="calico-system/csi-node-driver-6gm2m" Feb 9 18:35:47.884024 kubelet[1561]: I0209 18:35:47.883975 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98a6994b-5faa-4e05-b66c-96f857041f51-lib-modules\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.884067 kubelet[1561]: I0209 18:35:47.884035 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/98a6994b-5faa-4e05-b66c-96f857041f51-var-lib-calico\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.884091 kubelet[1561]: I0209 18:35:47.884068 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/650607dd-ab29-464c-97ca-372568502f70-lib-modules\") pod \"kube-proxy-rmvwn\" (UID: \"650607dd-ab29-464c-97ca-372568502f70\") " pod="kube-system/kube-proxy-rmvwn" Feb 9 18:35:47.884113 kubelet[1561]: I0209 18:35:47.884102 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq4lw\" (UniqueName: \"kubernetes.io/projected/650607dd-ab29-464c-97ca-372568502f70-kube-api-access-gq4lw\") pod \"kube-proxy-rmvwn\" (UID: \"650607dd-ab29-464c-97ca-372568502f70\") " pod="kube-system/kube-proxy-rmvwn" Feb 9 18:35:47.884138 kubelet[1561]: I0209 18:35:47.884125 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8d972391-7603-4ba7-9c5f-70ab2777349a-varrun\") pod \"csi-node-driver-6gm2m\" (UID: \"8d972391-7603-4ba7-9c5f-70ab2777349a\") " pod="calico-system/csi-node-driver-6gm2m" Feb 9 18:35:47.884161 kubelet[1561]: I0209 18:35:47.884147 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d972391-7603-4ba7-9c5f-70ab2777349a-kubelet-dir\") pod \"csi-node-driver-6gm2m\" (UID: \"8d972391-7603-4ba7-9c5f-70ab2777349a\") " pod="calico-system/csi-node-driver-6gm2m" Feb 9 18:35:47.884183 kubelet[1561]: I0209 18:35:47.884168 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/98a6994b-5faa-4e05-b66c-96f857041f51-policysync\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.884204 kubelet[1561]: I0209 18:35:47.884198 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/98a6994b-5faa-4e05-b66c-96f857041f51-cni-net-dir\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.884226 kubelet[1561]: I0209 18:35:47.884218 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/650607dd-ab29-464c-97ca-372568502f70-kube-proxy\") pod \"kube-proxy-rmvwn\" (UID: \"650607dd-ab29-464c-97ca-372568502f70\") " pod="kube-system/kube-proxy-rmvwn" 
Feb 9 18:35:47.884251 kubelet[1561]: I0209 18:35:47.884236 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98a6994b-5faa-4e05-b66c-96f857041f51-xtables-lock\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.884273 kubelet[1561]: I0209 18:35:47.884256 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98a6994b-5faa-4e05-b66c-96f857041f51-tigera-ca-bundle\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.884300 kubelet[1561]: I0209 18:35:47.884275 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/98a6994b-5faa-4e05-b66c-96f857041f51-node-certs\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.884322 kubelet[1561]: I0209 18:35:47.884300 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/98a6994b-5faa-4e05-b66c-96f857041f51-cni-bin-dir\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.884343 kubelet[1561]: I0209 18:35:47.884323 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/98a6994b-5faa-4e05-b66c-96f857041f51-flexvol-driver-host\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.884368 kubelet[1561]: I0209 18:35:47.884347 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzshq\" (UniqueName: \"kubernetes.io/projected/98a6994b-5faa-4e05-b66c-96f857041f51-kube-api-access-gzshq\") pod \"calico-node-br586\" (UID: \"98a6994b-5faa-4e05-b66c-96f857041f51\") " pod="calico-system/calico-node-br586" Feb 9 18:35:47.884390 kubelet[1561]: I0209 18:35:47.884373 1561 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:35:47.986736 kubelet[1561]: E0209 18:35:47.986661 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:47.986736 kubelet[1561]: W0209 18:35:47.986680 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:47.986736 kubelet[1561]: E0209 18:35:47.986704 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:47.987360 kubelet[1561]: E0209 18:35:47.987346 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:47.987436 kubelet[1561]: W0209 18:35:47.987422 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:47.987505 kubelet[1561]: E0209 18:35:47.987495 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:47.989006 kubelet[1561]: E0209 18:35:47.988989 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:47.989006 kubelet[1561]: W0209 18:35:47.989005 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:47.989081 kubelet[1561]: E0209 18:35:47.989021 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.086693 kubelet[1561]: E0209 18:35:48.086669 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.086794 kubelet[1561]: W0209 18:35:48.086781 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.086892 kubelet[1561]: E0209 18:35:48.086878 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.087122 kubelet[1561]: E0209 18:35:48.087111 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.087197 kubelet[1561]: W0209 18:35:48.087185 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.087260 kubelet[1561]: E0209 18:35:48.087243 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.087536 kubelet[1561]: E0209 18:35:48.087525 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.087619 kubelet[1561]: W0209 18:35:48.087608 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.087682 kubelet[1561]: E0209 18:35:48.087666 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:48.188727 kubelet[1561]: E0209 18:35:48.188703 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.188850 kubelet[1561]: W0209 18:35:48.188835 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.188940 kubelet[1561]: E0209 18:35:48.188929 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.189166 kubelet[1561]: E0209 18:35:48.189154 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.189263 kubelet[1561]: W0209 18:35:48.189250 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.189329 kubelet[1561]: E0209 18:35:48.189320 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.189574 kubelet[1561]: E0209 18:35:48.189563 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.189651 kubelet[1561]: W0209 18:35:48.189641 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.189710 kubelet[1561]: E0209 18:35:48.189696 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.290207 kubelet[1561]: E0209 18:35:48.290183 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.290372 kubelet[1561]: W0209 18:35:48.290357 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.290436 kubelet[1561]: E0209 18:35:48.290425 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.290707 kubelet[1561]: E0209 18:35:48.290694 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.290789 kubelet[1561]: W0209 18:35:48.290776 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.290845 kubelet[1561]: E0209 18:35:48.290836 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:48.291082 kubelet[1561]: E0209 18:35:48.291069 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.291166 kubelet[1561]: W0209 18:35:48.291155 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.291222 kubelet[1561]: E0209 18:35:48.291212 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.392416 kubelet[1561]: E0209 18:35:48.392390 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.392416 kubelet[1561]: W0209 18:35:48.392409 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.392416 kubelet[1561]: E0209 18:35:48.392428 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.392632 kubelet[1561]: E0209 18:35:48.392622 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.392632 kubelet[1561]: W0209 18:35:48.392633 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.392731 kubelet[1561]: E0209 18:35:48.392644 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.392822 kubelet[1561]: E0209 18:35:48.392809 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.392822 kubelet[1561]: W0209 18:35:48.392818 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.392893 kubelet[1561]: E0209 18:35:48.392828 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.493661 kubelet[1561]: E0209 18:35:48.493615 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.493661 kubelet[1561]: W0209 18:35:48.493640 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.493661 kubelet[1561]: E0209 18:35:48.493658 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:48.493890 kubelet[1561]: E0209 18:35:48.493858 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.493890 kubelet[1561]: W0209 18:35:48.493885 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.493945 kubelet[1561]: E0209 18:35:48.493896 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.494076 kubelet[1561]: E0209 18:35:48.494053 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.494076 kubelet[1561]: W0209 18:35:48.494064 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.494076 kubelet[1561]: E0209 18:35:48.494074 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.520111 kubelet[1561]: E0209 18:35:48.520092 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.520205 kubelet[1561]: W0209 18:35:48.520192 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.520270 kubelet[1561]: E0209 18:35:48.520253 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.595535 kubelet[1561]: E0209 18:35:48.595451 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.595679 kubelet[1561]: W0209 18:35:48.595662 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.595749 kubelet[1561]: E0209 18:35:48.595739 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.596101 kubelet[1561]: E0209 18:35:48.596089 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.596180 kubelet[1561]: W0209 18:35:48.596169 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.596240 kubelet[1561]: E0209 18:35:48.596225 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:48.619406 kubelet[1561]: E0209 18:35:48.619375 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:48.620192 env[1218]: time="2024-02-09T18:35:48.620154040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rmvwn,Uid:650607dd-ab29-464c-97ca-372568502f70,Namespace:kube-system,Attempt:0,}" Feb 9 18:35:48.697517 kubelet[1561]: E0209 18:35:48.697472 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.697517 kubelet[1561]: W0209 18:35:48.697493 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.697517 kubelet[1561]: E0209 18:35:48.697512 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.697754 kubelet[1561]: E0209 18:35:48.697717 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.697754 kubelet[1561]: W0209 18:35:48.697731 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.697754 kubelet[1561]: E0209 18:35:48.697743 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.708967 kubelet[1561]: E0209 18:35:48.708938 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:48.721399 kubelet[1561]: E0209 18:35:48.721381 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.721527 kubelet[1561]: W0209 18:35:48.721502 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.721609 kubelet[1561]: E0209 18:35:48.721597 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.798754 kubelet[1561]: E0209 18:35:48.798715 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.798754 kubelet[1561]: W0209 18:35:48.798736 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.798754 kubelet[1561]: E0209 18:35:48.798764 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:48.899899 kubelet[1561]: E0209 18:35:48.899792 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.900033 kubelet[1561]: W0209 18:35:48.900016 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.900112 kubelet[1561]: E0209 18:35:48.900101 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:48.921233 kubelet[1561]: E0209 18:35:48.921204 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:48.921233 kubelet[1561]: W0209 18:35:48.921223 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:48.921352 kubelet[1561]: E0209 18:35:48.921244 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:49.120954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866426528.mount: Deactivated successfully. Feb 9 18:35:49.124345 env[1218]: time="2024-02-09T18:35:49.124298870Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:49.125644 env[1218]: time="2024-02-09T18:35:49.125616020Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:49.127589 env[1218]: time="2024-02-09T18:35:49.127557121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:49.129067 env[1218]: time="2024-02-09T18:35:49.129039794Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:49.154938 env[1218]: time="2024-02-09T18:35:49.154795624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:49.154938 env[1218]: time="2024-02-09T18:35:49.154837845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:49.155242 env[1218]: time="2024-02-09T18:35:49.154853397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:49.155411 env[1218]: time="2024-02-09T18:35:49.155373557Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df8edfb3d829ad300d99aaffb9fcc52df794b98bd83b49903baa6f3beece46ab pid=1685 runtime=io.containerd.runc.v2 Feb 9 18:35:49.210436 env[1218]: time="2024-02-09T18:35:49.210386275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rmvwn,Uid:650607dd-ab29-464c-97ca-372568502f70,Namespace:kube-system,Attempt:0,} returns sandbox id \"df8edfb3d829ad300d99aaffb9fcc52df794b98bd83b49903baa6f3beece46ab\"" Feb 9 18:35:49.211224 kubelet[1561]: E0209 18:35:49.211200 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:49.212431 env[1218]: time="2024-02-09T18:35:49.212379312Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 18:35:49.213682 kubelet[1561]: E0209 18:35:49.213503 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:49.213833 env[1218]: time="2024-02-09T18:35:49.213790499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-br586,Uid:98a6994b-5faa-4e05-b66c-96f857041f51,Namespace:calico-system,Attempt:0,}" Feb 9 18:35:49.227108 env[1218]: time="2024-02-09T18:35:49.227044879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:49.227108 env[1218]: time="2024-02-09T18:35:49.227081902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:49.227108 env[1218]: time="2024-02-09T18:35:49.227092097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:49.227268 env[1218]: time="2024-02-09T18:35:49.227208164Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b839b9a977632c37387570374949f6b9c6fdac3e1364fc6500062162346adac pid=1727 runtime=io.containerd.runc.v2 Feb 9 18:35:49.263393 env[1218]: time="2024-02-09T18:35:49.263351143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-br586,Uid:98a6994b-5faa-4e05-b66c-96f857041f51,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b839b9a977632c37387570374949f6b9c6fdac3e1364fc6500062162346adac\"" Feb 9 18:35:49.263921 kubelet[1561]: E0209 18:35:49.263903 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:49.710414 kubelet[1561]: E0209 18:35:49.710365 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:49.850076 kubelet[1561]: E0209 18:35:49.849988 1561 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gm2m" podUID=8d972391-7603-4ba7-9c5f-70ab2777349a Feb 9 18:35:50.207793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800004061.mount: Deactivated successfully. Feb 9 18:35:50.535702 env[1218]: time="2024-02-09T18:35:50.535650564Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:50.536776 env[1218]: time="2024-02-09T18:35:50.536729127Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:50.538310 env[1218]: time="2024-02-09T18:35:50.538273181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:50.539535 env[1218]: time="2024-02-09T18:35:50.539510639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:50.540091 env[1218]: time="2024-02-09T18:35:50.540062656Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 18:35:50.541042 env[1218]: time="2024-02-09T18:35:50.541011391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 18:35:50.541995 env[1218]: time="2024-02-09T18:35:50.541957608Z" level=info msg="CreateContainer within sandbox \"df8edfb3d829ad300d99aaffb9fcc52df794b98bd83b49903baa6f3beece46ab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:35:50.552058 env[1218]: time="2024-02-09T18:35:50.552017371Z" level=info msg="CreateContainer within sandbox \"df8edfb3d829ad300d99aaffb9fcc52df794b98bd83b49903baa6f3beece46ab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"b7b0d9cf7a4ff7c28942d96b733e7397376c681700d47ce72618d07c0cd8a78a\"" Feb 9 18:35:50.552560 env[1218]: time="2024-02-09T18:35:50.552532962Z" level=info msg="StartContainer for \"b7b0d9cf7a4ff7c28942d96b733e7397376c681700d47ce72618d07c0cd8a78a\"" Feb 9 18:35:50.604071 env[1218]: time="2024-02-09T18:35:50.604029852Z" level=info msg="StartContainer for \"b7b0d9cf7a4ff7c28942d96b733e7397376c681700d47ce72618d07c0cd8a78a\" returns successfully" Feb 9 18:35:50.697000 audit[1818]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.699820 kubelet[1561]: E0209 18:35:50.699795 1561 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:50.697000 audit[1818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd50beac0 a2=0 a3=ffffb13496c0 items=0 ppid=1778 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.703215 kernel: audit: type=1325 audit(1707503750.697:194): table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.703278 kernel: audit: type=1300 audit(1707503750.697:194): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd50beac0 a2=0 a3=ffffb13496c0 items=0 ppid=1778 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.703310 kernel: audit: type=1325 audit(1707503750.697:195): table=mangle:36 family=2 entries=1 op=nft_register_chain pid=1817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.697000 audit[1817]: NETFILTER_CFG table=mangle:36 family=2 entries=1 op=nft_register_chain pid=1817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.697000 audit[1817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec812270 a2=0 a3=ffff9e7806c0 items=0 ppid=1778 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.707216 kernel: audit: type=1300 audit(1707503750.697:195): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec812270 a2=0 a3=ffff9e7806c0 items=0 ppid=1778 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.707273 kernel: audit: type=1327 audit(1707503750.697:195): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 18:35:50.697000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 18:35:50.708506 kernel: audit: type=1325 audit(1707503750.701:196): table=nat:37 family=2 entries=1 op=nft_register_chain pid=1819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.701000 audit[1819]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.701000 audit[1819]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6b78ca0 a2=0 a3=ffffbb7f36c0 items=0 ppid=1778 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.711007 kubelet[1561]: E0209 18:35:50.710978 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:50.712568 kernel: audit: type=1300 audit(1707503750.701:196): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6b78ca0 a2=0 a3=ffffbb7f36c0 items=0 ppid=1778 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.712597 kernel: audit: type=1327 audit(1707503750.701:196): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 18:35:50.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 18:35:50.713834 kernel: audit: type=1327 audit(1707503750.697:194): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 18:35:50.697000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 18:35:50.702000 audit[1820]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_chain pid=1820 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.716444 kernel: audit: type=1325 audit(1707503750.702:197): table=filter:38 family=2 entries=1 op=nft_register_chain pid=1820 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.702000 audit[1820]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee57bfa0 a2=0 a3=ffffa49806c0 items=0 ppid=1778 pid=1820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 18:35:50.705000 audit[1821]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=1821 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.705000 audit[1821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffccaeea70 a2=0 a3=ffffafe036c0 items=0 ppid=1778 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.705000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 18:35:50.705000 audit[1822]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1822 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.705000 audit[1822]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe79fcd00 a2=0 a3=ffffa8e576c0 items=0 ppid=1778 pid=1822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.705000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 18:35:50.799000 audit[1823]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1823 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.799000 audit[1823]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd1503320 a2=0 a3=ffff9fa096c0 items=0 ppid=1778 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 18:35:50.802000 audit[1825]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1825 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.802000 audit[1825]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc9468640 a2=0 a3=ffff865346c0 items=0 ppid=1778 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 18:35:50.806000 audit[1828]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1828 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.806000 audit[1828]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe106bfc0 a2=0 a3=ffff8a4b86c0 items=0 ppid=1778 pid=1828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 18:35:50.807000 audit[1829]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1829 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.807000 audit[1829]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe5481660 a2=0 a3=ffff909246c0 items=0 ppid=1778 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 18:35:50.809000 audit[1831]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.809000 audit[1831]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe5a50a20 a2=0 
a3=ffff87a886c0 items=0 ppid=1778 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 18:35:50.810000 audit[1832]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1832 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.810000 audit[1832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa790050 a2=0 a3=ffff949bc6c0 items=0 ppid=1778 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 18:35:50.812000 audit[1834]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1834 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.812000 audit[1834]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff7385100 a2=0 a3=ffffaa01a6c0 items=0 ppid=1778 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 18:35:50.815000 audit[1837]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1837 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.815000 audit[1837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd46d5760 a2=0 a3=ffffb9dec6c0 items=0 ppid=1778 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.815000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 18:35:50.816000 audit[1838]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.816000 audit[1838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4c74ce0 a2=0 a3=ffffbd42b6c0 items=0 ppid=1778 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 18:35:50.818000 audit[1840]: 
NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1840 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.818000 audit[1840]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff44ae000 a2=0 a3=ffffbdb626c0 items=0 ppid=1778 pid=1840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 18:35:50.819000 audit[1841]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.819000 audit[1841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa55efc0 a2=0 a3=ffffb9cf66c0 items=0 ppid=1778 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 18:35:50.821000 audit[1843]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.821000 audit[1843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc59bd410 a2=0 a3=ffff82e356c0 items=0 ppid=1778 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 18:35:50.824000 audit[1846]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1846 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.824000 audit[1846]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff3a83c90 a2=0 a3=ffff9a7716c0 items=0 ppid=1778 pid=1846 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.824000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 18:35:50.828000 audit[1849]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1849 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.828000 audit[1849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd8691350 a2=0 a3=ffff806cb6c0 items=0 ppid=1778 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 18:35:50.829000 audit[1850]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.829000 audit[1850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc190a720 a2=0 a3=ffffb8d246c0 items=0 ppid=1778 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 18:35:50.831000 audit[1852]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.831000 audit[1852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe1f67350 a2=0 a3=ffffa58466c0 items=0 ppid=1778 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:35:50.833000 audit[1855]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1855 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:35:50.833000 audit[1855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffff76c59f0 a2=0 a3=ffffb3c126c0 items=0 ppid=1778 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:35:50.842000 audit[1859]: NETFILTER_CFG table=filter:58 family=2 entries=4 op=nft_register_rule pid=1859 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:50.842000 audit[1859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffd35be570 a2=0 a3=ffffa95d16c0 items=0 ppid=1778 pid=1859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:50.851000 audit[1859]: NETFILTER_CFG table=nat:59 family=2 entries=50 op=nft_register_chain pid=1859 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:50.851000 audit[1859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21492 a0=3 a1=ffffd35be570 a2=0 
a3=ffffa95d16c0 items=0 ppid=1778 pid=1859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.851000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:50.879210 kubelet[1561]: E0209 18:35:50.879157 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:50.889064 kubelet[1561]: I0209 18:35:50.888644 1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rmvwn" podStartSLOduration=-9.223372028966175e+09 pod.CreationTimestamp="2024-02-09 18:35:43 +0000 UTC" firstStartedPulling="2024-02-09 18:35:49.211857194 +0000 UTC m=+19.787541326" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:50.888318599 +0000 UTC m=+21.464002811" watchObservedRunningTime="2024-02-09 18:35:50.888600725 +0000 UTC m=+21.464284857" Feb 9 18:35:50.899000 audit[1893]: NETFILTER_CFG table=filter:60 family=2 entries=8 op=nft_register_rule pid=1893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:50.899000 audit[1893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffdda4d2c0 a2=0 a3=ffffbf1936c0 items=0 ppid=1778 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:50.900000 audit[1893]: NETFILTER_CFG table=nat:61 family=2 entries=68 op=nft_register_rule pid=1893 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:50.900000 audit[1893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21492 a0=3 a1=ffffdda4d2c0 a2=0 a3=ffffbf1936c0 items=0 ppid=1778 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.900000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:50.909000 audit[1894]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.909000 audit[1894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff82e06e0 a2=0 a3=ffffa274f6c0 items=0 ppid=1778 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.909000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 18:35:50.911000 audit[1896]: NETFILTER_CFG table=filter:63 family=10 entries=2 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.911000 audit[1896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe3cfe4a0 a2=0 a3=ffffaf0b06c0 
items=0 ppid=1778 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 18:35:50.922000 audit[1899]: NETFILTER_CFG table=filter:64 family=10 entries=2 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.922000 audit[1899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffeaa76bb0 a2=0 a3=ffff8f1836c0 items=0 ppid=1778 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 18:35:50.922000 audit[1900]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.922000 audit[1900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6f56f00 a2=0 a3=ffffa24156c0 items=0 ppid=1778 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 18:35:50.924820 kubelet[1561]: E0209 18:35:50.924792 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.924820 kubelet[1561]: W0209 18:35:50.924810 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.924922 kubelet[1561]: E0209 18:35:50.924830 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.925022 kubelet[1561]: E0209 18:35:50.925006 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.925022 kubelet[1561]: W0209 18:35:50.925016 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.925085 kubelet[1561]: E0209 18:35:50.925027 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:50.925232 kubelet[1561]: E0209 18:35:50.925211 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.925232 kubelet[1561]: W0209 18:35:50.925224 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.925292 kubelet[1561]: E0209 18:35:50.925242 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.925418 kubelet[1561]: E0209 18:35:50.925401 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.925418 kubelet[1561]: W0209 18:35:50.925417 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.925473 kubelet[1561]: E0209 18:35:50.925428 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.925571 kubelet[1561]: E0209 18:35:50.925560 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.925571 kubelet[1561]: W0209 18:35:50.925570 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.925620 kubelet[1561]: E0209 18:35:50.925580 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.925717 kubelet[1561]: E0209 18:35:50.925704 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.925741 kubelet[1561]: W0209 18:35:50.925717 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.925741 kubelet[1561]: E0209 18:35:50.925727 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:50.924000 audit[1905]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.924000 audit[1905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe280f670 a2=0 a3=ffff99b396c0 items=0 ppid=1778 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.924000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 18:35:50.926008 kubelet[1561]: E0209 18:35:50.925881 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.926008 kubelet[1561]: W0209 18:35:50.925902 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.926008 kubelet[1561]: E0209 18:35:50.925913 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.926094 kubelet[1561]: E0209 18:35:50.926083 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.926094 kubelet[1561]: W0209 18:35:50.926092 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.926135 kubelet[1561]: E0209 18:35:50.926102 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.926268 kubelet[1561]: E0209 18:35:50.926259 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.926268 kubelet[1561]: W0209 18:35:50.926268 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.926311 kubelet[1561]: E0209 18:35:50.926278 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.926485 kubelet[1561]: E0209 18:35:50.926472 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.926485 kubelet[1561]: W0209 18:35:50.926483 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.926557 kubelet[1561]: E0209 18:35:50.926493 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:50.926654 kubelet[1561]: E0209 18:35:50.926642 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.926654 kubelet[1561]: W0209 18:35:50.926653 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.926700 kubelet[1561]: E0209 18:35:50.926662 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.926804 kubelet[1561]: E0209 18:35:50.926795 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.926827 kubelet[1561]: W0209 18:35:50.926804 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.926827 kubelet[1561]: E0209 18:35:50.926814 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.925000 audit[1911]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.925000 audit[1911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb9b3980 a2=0 a3=ffffa04f26c0 items=0 ppid=1778 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 18:35:50.927055 kubelet[1561]: E0209 18:35:50.927045 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.927087 kubelet[1561]: W0209 18:35:50.927055 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.927087 kubelet[1561]: E0209 18:35:50.927065 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.927968 kubelet[1561]: E0209 18:35:50.927945 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.927968 kubelet[1561]: W0209 18:35:50.927962 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.928039 kubelet[1561]: E0209 18:35:50.927977 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:50.928170 kubelet[1561]: E0209 18:35:50.928148 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.928170 kubelet[1561]: W0209 18:35:50.928161 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.928225 kubelet[1561]: E0209 18:35:50.928173 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.928494 kubelet[1561]: E0209 18:35:50.928481 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:50.928494 kubelet[1561]: W0209 18:35:50.928494 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:50.928607 kubelet[1561]: E0209 18:35:50.928507 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:50.928000 audit[1921]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_rule pid=1921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.928000 audit[1921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff0099530 a2=0 a3=ffff94e926c0 items=0 ppid=1778 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.928000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 18:35:50.931000 audit[1924]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.931000 audit[1924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffeda66e00 a2=0 a3=ffff90b0f6c0 items=0 ppid=1778 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.931000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 18:35:50.932000 audit[1925]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.932000 audit[1925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1eacfa0 a2=0 a3=ffff8f61a6c0 items=0 ppid=1778 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 18:35:50.932000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 18:35:50.934000 audit[1927]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1927 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.934000 audit[1927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdf036d00 a2=0 a3=ffffafc2d6c0 items=0 ppid=1778 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.934000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 18:35:50.935000 audit[1928]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_chain pid=1928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.935000 audit[1928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf6e8650 a2=0 a3=ffffb9a136c0 items=0 ppid=1778 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 18:35:50.937000 audit[1930]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.937000 audit[1930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebe21600 a2=0 a3=ffff80a476c0 items=0 ppid=1778 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 18:35:50.940000 audit[1933]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=1933 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.940000 audit[1933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff566ef10 a2=0 a3=ffffbf5586c0 items=0 ppid=1778 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 18:35:50.943000 audit[1936]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_rule pid=1936 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.943000 audit[1936]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 
a1=fffff9717210 a2=0 a3=ffff88cd16c0 items=0 ppid=1778 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 18:35:50.944000 audit[1937]: NETFILTER_CFG table=nat:76 family=10 entries=1 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.944000 audit[1937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc6b0a3c0 a2=0 a3=ffffb1a6f6c0 items=0 ppid=1778 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.944000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 18:35:50.946000 audit[1939]: NETFILTER_CFG table=nat:77 family=10 entries=2 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.946000 audit[1939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffff129e830 a2=0 a3=ffffb78586c0 items=0 ppid=1778 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:35:50.949000 audit[1942]: NETFILTER_CFG table=nat:78 family=10 entries=2 op=nft_register_chain pid=1942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:35:50.949000 audit[1942]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe44a8300 a2=0 a3=ffff9b96b6c0 items=0 ppid=1778 pid=1942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.949000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:35:50.954000 audit[1946]: NETFILTER_CFG table=filter:79 family=10 entries=3 op=nft_register_rule pid=1946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 18:35:50.954000 audit[1946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffeecbcd20 a2=0 a3=ffffa70296c0 items=0 ppid=1778 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.954000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:50.954000 audit[1946]: NETFILTER_CFG table=nat:80 
family=10 entries=10 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 18:35:50.954000 audit[1946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffeecbcd20 a2=0 a3=ffffa70296c0 items=0 ppid=1778 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:50.954000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:51.009790 kubelet[1561]: E0209 18:35:51.009768 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.009790 kubelet[1561]: W0209 18:35:51.009786 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.009950 kubelet[1561]: E0209 18:35:51.009805 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:51.010012 kubelet[1561]: E0209 18:35:51.010001 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.010065 kubelet[1561]: W0209 18:35:51.010012 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.010065 kubelet[1561]: E0209 18:35:51.010030 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:51.010253 kubelet[1561]: E0209 18:35:51.010238 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.010292 kubelet[1561]: W0209 18:35:51.010254 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.010292 kubelet[1561]: E0209 18:35:51.010276 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:51.010435 kubelet[1561]: E0209 18:35:51.010426 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.010435 kubelet[1561]: W0209 18:35:51.010436 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.010508 kubelet[1561]: E0209 18:35:51.010452 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:51.010624 kubelet[1561]: E0209 18:35:51.010615 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.010624 kubelet[1561]: W0209 18:35:51.010624 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.010694 kubelet[1561]: E0209 18:35:51.010638 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:51.010791 kubelet[1561]: E0209 18:35:51.010783 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.010791 kubelet[1561]: W0209 18:35:51.010792 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.010850 kubelet[1561]: E0209 18:35:51.010806 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:51.011015 kubelet[1561]: E0209 18:35:51.011002 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.011051 kubelet[1561]: W0209 18:35:51.011015 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.011051 kubelet[1561]: E0209 18:35:51.011032 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:51.011197 kubelet[1561]: E0209 18:35:51.011189 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.011197 kubelet[1561]: W0209 18:35:51.011197 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.011263 kubelet[1561]: E0209 18:35:51.011210 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:51.011347 kubelet[1561]: E0209 18:35:51.011338 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.011347 kubelet[1561]: W0209 18:35:51.011347 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.011407 kubelet[1561]: E0209 18:35:51.011360 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:35:51.011518 kubelet[1561]: E0209 18:35:51.011508 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.011518 kubelet[1561]: W0209 18:35:51.011518 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.011578 kubelet[1561]: E0209 18:35:51.011531 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:51.011758 kubelet[1561]: E0209 18:35:51.011744 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.011794 kubelet[1561]: W0209 18:35:51.011760 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.011794 kubelet[1561]: E0209 18:35:51.011781 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:51.011958 kubelet[1561]: E0209 18:35:51.011949 1561 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:35:51.011958 kubelet[1561]: W0209 18:35:51.011958 1561 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:35:51.012021 kubelet[1561]: E0209 18:35:51.011968 1561 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:35:51.538999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3841047324.mount: Deactivated successfully. 
Feb 9 18:35:51.605261 env[1218]: time="2024-02-09T18:35:51.605211129Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:51.606390 env[1218]: time="2024-02-09T18:35:51.606354323Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:51.607681 env[1218]: time="2024-02-09T18:35:51.607649024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:51.608752 env[1218]: time="2024-02-09T18:35:51.608728162Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:51.609098 env[1218]: time="2024-02-09T18:35:51.609068521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57\"" Feb 9 18:35:51.610821 env[1218]: time="2024-02-09T18:35:51.610793149Z" level=info msg="CreateContainer within sandbox \"9b839b9a977632c37387570374949f6b9c6fdac3e1364fc6500062162346adac\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 18:35:51.621163 env[1218]: time="2024-02-09T18:35:51.621127005Z" level=info msg="CreateContainer within sandbox \"9b839b9a977632c37387570374949f6b9c6fdac3e1364fc6500062162346adac\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5bafd219828041173906c16366968860d5033fe23ea4c272250923137ce1d894\"" Feb 9 18:35:51.621622 env[1218]: time="2024-02-09T18:35:51.621597198Z" level=info msg="StartContainer for \"5bafd219828041173906c16366968860d5033fe23ea4c272250923137ce1d894\"" Feb 9 18:35:51.672159 env[1218]: time="2024-02-09T18:35:51.672116725Z" level=info msg="StartContainer for \"5bafd219828041173906c16366968860d5033fe23ea4c272250923137ce1d894\" returns successfully" Feb 9 18:35:51.712137 kubelet[1561]: E0209 18:35:51.712095 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:51.850588 kubelet[1561]: E0209 18:35:51.850480 1561 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gm2m" podUID=8d972391-7603-4ba7-9c5f-70ab2777349a Feb 9 18:35:51.882071 kubelet[1561]: E0209 18:35:51.881817 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:51.882071 kubelet[1561]: E0209 18:35:51.881818 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:51.941109 env[1218]: time="2024-02-09T18:35:51.941066438Z" level=info msg="shim disconnected" id=5bafd219828041173906c16366968860d5033fe23ea4c272250923137ce1d894 Feb 9 18:35:51.941109 
env[1218]: time="2024-02-09T18:35:51.941109183Z" level=warning msg="cleaning up after shim disconnected" id=5bafd219828041173906c16366968860d5033fe23ea4c272250923137ce1d894 namespace=k8s.io Feb 9 18:35:51.941268 env[1218]: time="2024-02-09T18:35:51.941120219Z" level=info msg="cleaning up dead shim" Feb 9 18:35:51.948624 env[1218]: time="2024-02-09T18:35:51.948159083Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:35:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2011 runtime=io.containerd.runc.v2\n" Feb 9 18:35:52.082000 audit[2048]: NETFILTER_CFG table=filter:81 family=2 entries=7 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:52.082000 audit[2048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffeee2c240 a2=0 a3=ffffbc8a36c0 items=0 ppid=1778 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.082000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:52.098000 audit[2048]: NETFILTER_CFG table=nat:82 family=2 entries=75 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:52.098000 audit[2048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffeee2c240 a2=0 a3=ffffbc8a36c0 items=0 ppid=1778 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:52.098000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:52.712421 kubelet[1561]: E0209 18:35:52.712385 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:52.884472 kubelet[1561]: E0209 18:35:52.884403 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:52.885097 env[1218]: time="2024-02-09T18:35:52.885063955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 18:35:53.713273 kubelet[1561]: E0209 18:35:53.713228 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:53.850698 kubelet[1561]: E0209 18:35:53.850209 1561 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gm2m" podUID=8d972391-7603-4ba7-9c5f-70ab2777349a Feb 9 18:35:53.851313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2921620731.mount: Deactivated successfully. 
Feb 9 18:35:54.713712 kubelet[1561]: E0209 18:35:54.713669 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:55.563195 env[1218]: time="2024-02-09T18:35:55.563142897Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:55.564660 env[1218]: time="2024-02-09T18:35:55.564623457Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:55.565944 env[1218]: time="2024-02-09T18:35:55.565913618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:55.567209 env[1218]: time="2024-02-09T18:35:55.567183664Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:55.567613 env[1218]: time="2024-02-09T18:35:55.567585297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b\"" Feb 9 18:35:55.569500 env[1218]: time="2024-02-09T18:35:55.569475608Z" level=info msg="CreateContainer within sandbox \"9b839b9a977632c37387570374949f6b9c6fdac3e1364fc6500062162346adac\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 18:35:55.581630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3604559988.mount: Deactivated successfully. 
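
The ImageCreate/ImageUpdate events above appear to be containerd recording, in turn, the pulled tag, the image config digest (the sha256 image ID), and the repo digest before PullImage returns. For reference only, a pull like the one logged could be reproduced against the same containerd socket with the standard Go client; this is a sketch under the assumption that the default Flatcar socket path and the k8s.io namespace seen in the log apply, not something the log itself runs.

    // pull_sketch.go -- minimal containerd client pull of the image seen above.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// The CRI-managed images in the log live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.27.0", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled:", img.Name(), img.Target().Digest)
    }
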
Feb 9 18:35:55.584905 env[1218]: time="2024-02-09T18:35:55.584855525Z" level=info msg="CreateContainer within sandbox \"9b839b9a977632c37387570374949f6b9c6fdac3e1364fc6500062162346adac\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"286f994c3c67aa68683ec65704b5dfb69b129d5085cc832828ebe7f0541ff64b\"" Feb 9 18:35:55.585407 env[1218]: time="2024-02-09T18:35:55.585379972Z" level=info msg="StartContainer for \"286f994c3c67aa68683ec65704b5dfb69b129d5085cc832828ebe7f0541ff64b\"" Feb 9 18:35:55.659867 env[1218]: time="2024-02-09T18:35:55.658114935Z" level=info msg="StartContainer for \"286f994c3c67aa68683ec65704b5dfb69b129d5085cc832828ebe7f0541ff64b\" returns successfully" Feb 9 18:35:55.713964 kubelet[1561]: E0209 18:35:55.713917 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:55.850725 kubelet[1561]: E0209 18:35:55.850562 1561 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gm2m" podUID=8d972391-7603-4ba7-9c5f-70ab2777349a Feb 9 18:35:55.893782 kubelet[1561]: E0209 18:35:55.893383 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:56.310750 env[1218]: time="2024-02-09T18:35:56.310686892Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:35:56.326766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-286f994c3c67aa68683ec65704b5dfb69b129d5085cc832828ebe7f0541ff64b-rootfs.mount: Deactivated successfully. 
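
The "failed to reload cni configuration ... no network config found in /etc/cni/net.d" message is containerd's CRI plugin reacting to a file-change event in /etc/cni/net.d before the install-cni container has finished writing a usable network config. Until a config file appears there, the node keeps reporting NetworkReady=false and the "cni plugin not initialized" pod-sync errors above persist. A rough sketch of the readiness condition implied by that message follows; the file extensions checked are the conventional ones and are an assumption, not taken from this log.

    // cni_config_check.go -- report whether /etc/cni/net.d contains any CNI
    // network config; loosely mirrors the condition behind the
    // "no network config found" error above.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	const dir = "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("cannot read", dir, ":", err)
    		return
    	}
    	found := false
    	for _, e := range entries {
    		switch filepath.Ext(e.Name()) {
    		case ".conf", ".conflist", ".json":
    			fmt.Println("network config:", filepath.Join(dir, e.Name()))
    			found = true
    		}
    	}
    	if !found {
    		fmt.Println("no network config found in", dir, "- CNI not initialized")
    	}
    }
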
Feb 9 18:35:56.388604 kubelet[1561]: I0209 18:35:56.387941 1561 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:35:56.421985 env[1218]: time="2024-02-09T18:35:56.421938385Z" level=info msg="shim disconnected" id=286f994c3c67aa68683ec65704b5dfb69b129d5085cc832828ebe7f0541ff64b Feb 9 18:35:56.421985 env[1218]: time="2024-02-09T18:35:56.421982582Z" level=warning msg="cleaning up after shim disconnected" id=286f994c3c67aa68683ec65704b5dfb69b129d5085cc832828ebe7f0541ff64b namespace=k8s.io Feb 9 18:35:56.421985 env[1218]: time="2024-02-09T18:35:56.421993061Z" level=info msg="cleaning up dead shim" Feb 9 18:35:56.428032 env[1218]: time="2024-02-09T18:35:56.427996775Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:35:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2106 runtime=io.containerd.runc.v2\n" Feb 9 18:35:56.714419 kubelet[1561]: E0209 18:35:56.714245 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:56.897120 kubelet[1561]: E0209 18:35:56.897093 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:35:56.898293 env[1218]: time="2024-02-09T18:35:56.898258673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 18:35:57.715157 kubelet[1561]: E0209 18:35:57.715105 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:57.856851 env[1218]: time="2024-02-09T18:35:57.856286294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gm2m,Uid:8d972391-7603-4ba7-9c5f-70ab2777349a,Namespace:calico-system,Attempt:0,}" Feb 9 18:35:57.979996 env[1218]: time="2024-02-09T18:35:57.977980234Z" level=error msg="Failed to destroy network for sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:57.982814 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca-shm.mount: Deactivated successfully. 
Feb 9 18:35:57.984206 env[1218]: time="2024-02-09T18:35:57.984163680Z" level=error msg="encountered an error cleaning up failed sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:57.984355 env[1218]: time="2024-02-09T18:35:57.984326789Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gm2m,Uid:8d972391-7603-4ba7-9c5f-70ab2777349a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:57.984957 kubelet[1561]: E0209 18:35:57.984608 1561 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:57.984957 kubelet[1561]: E0209 18:35:57.984662 1561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6gm2m" Feb 9 18:35:57.984957 kubelet[1561]: E0209 18:35:57.984684 1561 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6gm2m" Feb 9 18:35:57.985112 kubelet[1561]: E0209 18:35:57.984734 1561 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6gm2m_calico-system(8d972391-7603-4ba7-9c5f-70ab2777349a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6gm2m_calico-system(8d972391-7603-4ba7-9c5f-70ab2777349a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6gm2m" podUID=8d972391-7603-4ba7-9c5f-70ab2777349a Feb 9 18:35:58.715604 kubelet[1561]: E0209 18:35:58.715557 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:35:58.904800 kubelet[1561]: I0209 18:35:58.904229 1561 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:35:58.905297 env[1218]: time="2024-02-09T18:35:58.905265463Z" level=info msg="StopPodSandbox for \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\"" Feb 9 18:35:58.926812 env[1218]: time="2024-02-09T18:35:58.926761397Z" level=error msg="StopPodSandbox for \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\" failed" error="failed to destroy network for sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:35:58.927242 kubelet[1561]: E0209 18:35:58.927108 1561 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:35:58.927242 kubelet[1561]: E0209 18:35:58.927152 1561 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca} Feb 9 18:35:58.927242 kubelet[1561]: E0209 18:35:58.927184 1561 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d972391-7603-4ba7-9c5f-70ab2777349a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 18:35:58.927242 kubelet[1561]: E0209 18:35:58.927209 1561 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d972391-7603-4ba7-9c5f-70ab2777349a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6gm2m" podUID=8d972391-7603-4ba7-9c5f-70ab2777349a Feb 9 18:35:59.716187 kubelet[1561]: E0209 18:35:59.716141 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:00.011757 kubelet[1561]: I0209 18:36:00.011723 1561 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:36:00.165673 kubelet[1561]: I0209 18:36:00.165536 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crhps\" (UniqueName: \"kubernetes.io/projected/b02c0b3a-13c2-4298-99e1-5776ed917536-kube-api-access-crhps\") pod \"nginx-deployment-8ffc5cf85-q8gfg\" (UID: \"b02c0b3a-13c2-4298-99e1-5776ed917536\") " pod="default/nginx-deployment-8ffc5cf85-q8gfg" Feb 9 18:36:00.316986 env[1218]: time="2024-02-09T18:36:00.316625731Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-q8gfg,Uid:b02c0b3a-13c2-4298-99e1-5776ed917536,Namespace:default,Attempt:0,}" Feb 9 18:36:00.660365 env[1218]: time="2024-02-09T18:36:00.658155287Z" level=error msg="Failed to destroy network for sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:36:00.660365 env[1218]: time="2024-02-09T18:36:00.658479948Z" level=error msg="encountered an error cleaning up failed sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:36:00.660365 env[1218]: time="2024-02-09T18:36:00.658522185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-q8gfg,Uid:b02c0b3a-13c2-4298-99e1-5776ed917536,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:36:00.659670 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e-shm.mount: Deactivated successfully. Feb 9 18:36:00.660795 kubelet[1561]: E0209 18:36:00.658745 1561 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:36:00.660795 kubelet[1561]: E0209 18:36:00.658803 1561 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-q8gfg" Feb 9 18:36:00.660795 kubelet[1561]: E0209 18:36:00.658825 1561 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8ffc5cf85-q8gfg" Feb 9 18:36:00.660922 kubelet[1561]: E0209 18:36:00.658887 1561 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8ffc5cf85-q8gfg_default(b02c0b3a-13c2-4298-99e1-5776ed917536)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8ffc5cf85-q8gfg_default(b02c0b3a-13c2-4298-99e1-5776ed917536)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-q8gfg" podUID=b02c0b3a-13c2-4298-99e1-5776ed917536 Feb 9 18:36:00.717231 kubelet[1561]: E0209 18:36:00.717172 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:00.907239 kubelet[1561]: I0209 18:36:00.907200 1561 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:00.907817 env[1218]: time="2024-02-09T18:36:00.907766132Z" level=info msg="StopPodSandbox for \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\"" Feb 9 18:36:00.929183 env[1218]: time="2024-02-09T18:36:00.928646294Z" level=error msg="StopPodSandbox for \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\" failed" error="failed to destroy network for sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:36:00.929296 kubelet[1561]: E0209 18:36:00.928922 1561 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:00.929296 kubelet[1561]: E0209 18:36:00.928975 1561 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e} Feb 9 18:36:00.929296 kubelet[1561]: E0209 18:36:00.929006 1561 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b02c0b3a-13c2-4298-99e1-5776ed917536\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 18:36:00.929296 kubelet[1561]: E0209 18:36:00.929042 1561 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b02c0b3a-13c2-4298-99e1-5776ed917536\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8ffc5cf85-q8gfg" podUID=b02c0b3a-13c2-4298-99e1-5776ed917536 Feb 9 18:36:01.122447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707168377.mount: Deactivated successfully. 
Feb 9 18:36:01.157641 env[1218]: time="2024-02-09T18:36:01.157574944Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:01.159762 env[1218]: time="2024-02-09T18:36:01.159717584Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:01.161289 env[1218]: time="2024-02-09T18:36:01.161254498Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:01.162945 env[1218]: time="2024-02-09T18:36:01.162916964Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:01.163438 env[1218]: time="2024-02-09T18:36:01.163410497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774\"" Feb 9 18:36:01.169761 env[1218]: time="2024-02-09T18:36:01.169692544Z" level=info msg="CreateContainer within sandbox \"9b839b9a977632c37387570374949f6b9c6fdac3e1364fc6500062162346adac\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 18:36:01.185769 env[1218]: time="2024-02-09T18:36:01.185270671Z" level=info msg="CreateContainer within sandbox \"9b839b9a977632c37387570374949f6b9c6fdac3e1364fc6500062162346adac\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2b5f36ea834e89fd494f9767fa284e12c17fbf91f383689755942cc526e5210a\"" Feb 9 18:36:01.185977 env[1218]: time="2024-02-09T18:36:01.185937193Z" level=info msg="StartContainer for \"2b5f36ea834e89fd494f9767fa284e12c17fbf91f383689755942cc526e5210a\"" Feb 9 18:36:01.248146 env[1218]: time="2024-02-09T18:36:01.248096907Z" level=info msg="StartContainer for \"2b5f36ea834e89fd494f9767fa284e12c17fbf91f383689755942cc526e5210a\" returns successfully" Feb 9 18:36:01.350265 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 18:36:01.350376 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 9 18:36:01.717712 kubelet[1561]: E0209 18:36:01.717628 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:01.911002 kubelet[1561]: E0209 18:36:01.910974 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:36:01.928310 kubelet[1561]: I0209 18:36:01.928271 1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-br586" podStartSLOduration=-9.223372017926542e+09 pod.CreationTimestamp="2024-02-09 18:35:43 +0000 UTC" firstStartedPulling="2024-02-09 18:35:49.264766047 +0000 UTC m=+19.840450179" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:36:01.926353907 +0000 UTC m=+32.502038039" watchObservedRunningTime="2024-02-09 18:36:01.928233481 +0000 UTC m=+32.503917613" Feb 9 18:36:02.595000 audit[2395]: AVC avc: denied { write } for pid=2395 comm="tee" name="fd" dev="proc" ino=15125 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:36:02.599629 kernel: kauditd_printk_skb: 134 callbacks suppressed Feb 9 18:36:02.599714 kernel: audit: type=1400 audit(1707503762.595:242): avc: denied { write } for pid=2395 comm="tee" name="fd" dev="proc" ino=15125 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:36:02.599737 kernel: audit: type=1300 audit(1707503762.595:242): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe0d0098f a2=241 a3=1b6 items=1 ppid=2344 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.595000 audit[2395]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe0d0098f a2=241 a3=1b6 items=1 ppid=2344 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.595000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 18:36:02.603315 kernel: audit: type=1307 audit(1707503762.595:242): cwd="/etc/service/enabled/bird6/log" Feb 9 18:36:02.603385 kernel: audit: type=1302 audit(1707503762.595:242): item=0 name="/dev/fd/63" inode=15122 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:02.595000 audit: PATH item=0 name="/dev/fd/63" inode=15122 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:02.595000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:36:02.607046 kernel: audit: type=1327 audit(1707503762.595:242): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:36:02.601000 audit[2401]: AVC avc: denied { write } for pid=2401 comm="tee" name="fd" dev="proc" ino=15132 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 
18:36:02.612033 kernel: audit: type=1400 audit(1707503762.601:243): avc: denied { write } for pid=2401 comm="tee" name="fd" dev="proc" ino=15132 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:36:02.612100 kernel: audit: type=1300 audit(1707503762.601:243): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff16e398f a2=241 a3=1b6 items=1 ppid=2363 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.601000 audit[2401]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff16e398f a2=241 a3=1b6 items=1 ppid=2363 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.601000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 18:36:02.601000 audit: PATH item=0 name="/dev/fd/63" inode=15129 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:02.621694 kernel: audit: type=1307 audit(1707503762.601:243): cwd="/etc/service/enabled/confd/log" Feb 9 18:36:02.621757 kernel: audit: type=1302 audit(1707503762.601:243): item=0 name="/dev/fd/63" inode=15129 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:02.601000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:36:02.626603 kernel: audit: type=1327 audit(1707503762.601:243): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:36:02.611000 audit[2398]: AVC avc: denied { write } for pid=2398 comm="tee" name="fd" dev="proc" ino=13911 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:36:02.611000 audit[2398]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd9e8e980 a2=241 a3=1b6 items=1 ppid=2343 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.611000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 18:36:02.611000 audit: PATH item=0 name="/dev/fd/63" inode=13905 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:02.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:36:02.617000 audit[2406]: AVC avc: denied { write } for pid=2406 comm="tee" name="fd" dev="proc" ino=15413 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:36:02.617000 audit[2406]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc2bbd97f a2=241 a3=1b6 items=1 ppid=2345 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.617000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 18:36:02.617000 audit: PATH item=0 name="/dev/fd/63" inode=15410 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:02.617000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:36:02.633000 audit[2410]: AVC avc: denied { write } for pid=2410 comm="tee" name="fd" dev="proc" ino=15143 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:36:02.633000 audit[2410]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffda648990 a2=241 a3=1b6 items=1 ppid=2357 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.633000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 18:36:02.633000 audit: PATH item=0 name="/dev/fd/63" inode=13908 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:02.633000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:36:02.638000 audit[2420]: AVC avc: denied { write } for pid=2420 comm="tee" name="fd" dev="proc" ino=13915 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:36:02.638000 audit[2420]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcbda898f a2=241 a3=1b6 items=1 ppid=2351 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.638000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 18:36:02.638000 audit: PATH item=0 name="/dev/fd/63" inode=16430 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:02.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:36:02.642000 audit[2424]: AVC avc: denied { write } for pid=2424 comm="tee" name="fd" dev="proc" ino=13925 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:36:02.642000 audit[2424]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff66ea991 a2=241 a3=1b6 items=1 ppid=2367 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.642000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 18:36:02.642000 audit: PATH item=0 name="/dev/fd/63" inode=16431 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 
9 18:36:02.642000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:36:02.718264 kubelet[1561]: E0209 18:36:02.718227 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:02.747895 kernel: Initializing XFRM netlink socket Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit: BPF prog-id=10 op=LOAD Feb 9 18:36:02.842000 audit[2491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff87a0d58 a2=70 a3=0 items=0 ppid=2352 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.842000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:36:02.842000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit: BPF prog-id=11 op=LOAD Feb 9 18:36:02.842000 audit[2491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff87a0d58 a2=70 a3=4a174c items=0 ppid=2352 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.842000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:36:02.842000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:36:02.842000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.842000 audit[2491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=fffff87a0d88 a2=70 a3=7f379f items=0 ppid=2352 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.842000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:36:02.843000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.843000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.843000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Feb 9 18:36:02.843000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.843000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.843000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.843000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.843000 audit[2491]: AVC avc: denied { perfmon } for pid=2491 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.843000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.843000 audit[2491]: AVC avc: denied { bpf } for pid=2491 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.843000 audit: BPF prog-id=12 op=LOAD Feb 9 18:36:02.843000 audit[2491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff87a0cd8 a2=70 a3=7f37b9 items=0 ppid=2352 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.843000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:36:02.845000 audit[2495]: AVC avc: denied { bpf } for pid=2495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.845000 audit[2495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd57c4578 a2=70 a3=0 items=0 ppid=2352 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.845000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 18:36:02.845000 audit[2495]: AVC avc: denied { bpf } for pid=2495 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:36:02.845000 audit[2495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd57c4458 a2=70 a3=2 items=0 ppid=2352 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.845000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 18:36:02.857000 audit: BPF prog-id=12 op=UNLOAD Feb 9 18:36:02.901000 audit[2521]: NETFILTER_CFG table=mangle:83 family=2 entries=19 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:02.901000 audit[2521]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6800 a0=3 a1=ffffc40487d0 a2=0 a3=ffffafc78fa8 items=0 ppid=2352 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.901000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:02.912325 kubelet[1561]: E0209 18:36:02.911961 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:36:02.916000 audit[2520]: NETFILTER_CFG table=raw:84 family=2 entries=19 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:02.916000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6132 a0=3 a1=ffffc2a52a20 a2=0 a3=ffffa07dcfa8 items=0 ppid=2352 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.916000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:02.921000 audit[2528]: NETFILTER_CFG table=nat:85 family=2 entries=16 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:02.921000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5188 a0=3 a1=ffffeb316fb0 a2=0 a3=ffff9fd46fa8 items=0 ppid=2352 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.921000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:02.922000 audit[2524]: NETFILTER_CFG table=filter:86 family=2 entries=39 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:02.922000 audit[2524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18472 a0=3 a1=ffffd83351a0 a2=0 a3=ffffa7869fa8 items=0 ppid=2352 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:02.922000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:02.935171 systemd[1]: run-containerd-runc-k8s.io-2b5f36ea834e89fd494f9767fa284e12c17fbf91f383689755942cc526e5210a-runc.6BljnJ.mount: Deactivated 
successfully. Feb 9 18:36:03.718588 kubelet[1561]: E0209 18:36:03.718538 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:03.758540 systemd-networkd[1103]: vxlan.calico: Link UP Feb 9 18:36:03.758546 systemd-networkd[1103]: vxlan.calico: Gained carrier Feb 9 18:36:04.719605 kubelet[1561]: E0209 18:36:04.719563 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:04.966063 systemd-networkd[1103]: vxlan.calico: Gained IPv6LL Feb 9 18:36:05.720033 kubelet[1561]: E0209 18:36:05.719989 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:06.720751 kubelet[1561]: E0209 18:36:06.720710 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:07.721144 kubelet[1561]: E0209 18:36:07.721092 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:08.003983 update_engine[1204]: I0209 18:36:08.003732 1204 update_attempter.cc:509] Updating boot flags... Feb 9 18:36:08.722151 kubelet[1561]: E0209 18:36:08.722102 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:09.722261 kubelet[1561]: E0209 18:36:09.722220 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:10.699931 kubelet[1561]: E0209 18:36:10.699859 1561 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:10.723439 kubelet[1561]: E0209 18:36:10.723372 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:10.851225 env[1218]: time="2024-02-09T18:36:10.851156953Z" level=info msg="StopPodSandbox for \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\"" Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.895 [INFO][2598] k8s.go 578: Cleaning up netns ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.895 [INFO][2598] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" iface="eth0" netns="/var/run/netns/cni-d57a8c9d-cc73-a71e-129e-dc3b61542be4" Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.895 [INFO][2598] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" iface="eth0" netns="/var/run/netns/cni-d57a8c9d-cc73-a71e-129e-dc3b61542be4" Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.896 [INFO][2598] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" iface="eth0" netns="/var/run/netns/cni-d57a8c9d-cc73-a71e-129e-dc3b61542be4" Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.896 [INFO][2598] k8s.go 585: Releasing IP address(es) ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.896 [INFO][2598] utils.go 188: Calico CNI releasing IP address ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.950 [INFO][2606] ipam_plugin.go 415: Releasing address using handleID ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" HandleID="k8s-pod-network.7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.950 [INFO][2606] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.950 [INFO][2606] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.960 [WARNING][2606] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" HandleID="k8s-pod-network.7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.960 [INFO][2606] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" HandleID="k8s-pod-network.7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.962 [INFO][2606] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:36:10.965465 env[1218]: 2024-02-09 18:36:10.963 [INFO][2598] k8s.go 591: Teardown processing complete. ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:10.965465 env[1218]: time="2024-02-09T18:36:10.964355269Z" level=info msg="TearDown network for sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\" successfully" Feb 9 18:36:10.965465 env[1218]: time="2024-02-09T18:36:10.964385428Z" level=info msg="StopPodSandbox for \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\" returns successfully" Feb 9 18:36:10.965465 env[1218]: time="2024-02-09T18:36:10.965098283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gm2m,Uid:8d972391-7603-4ba7-9c5f-70ab2777349a,Namespace:calico-system,Attempt:1,}" Feb 9 18:36:10.965719 systemd[1]: run-netns-cni\x2dd57a8c9d\x2dcc73\x2da71e\x2d129e\x2ddc3b61542be4.mount: Deactivated successfully. 
Feb 9 18:36:11.064189 systemd-networkd[1103]: cali83bc453dce9: Link UP Feb 9 18:36:11.065464 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:36:11.065536 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali83bc453dce9: link becomes ready Feb 9 18:36:11.065581 systemd-networkd[1103]: cali83bc453dce9: Gained carrier Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.004 [INFO][2614] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.94-k8s-csi--node--driver--6gm2m-eth0 csi-node-driver- calico-system 8d972391-7603-4ba7-9c5f-70ab2777349a 934 0 2024-02-09 18:35:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.94 csi-node-driver-6gm2m eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali83bc453dce9 [] []}} ContainerID="beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" Namespace="calico-system" Pod="csi-node-driver-6gm2m" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--6gm2m-" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.004 [INFO][2614] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" Namespace="calico-system" Pod="csi-node-driver-6gm2m" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.026 [INFO][2628] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" HandleID="k8s-pod-network.beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.038 [INFO][2628] ipam_plugin.go 268: Auto assigning IP ContainerID="beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" HandleID="k8s-pod-network.beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000229ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.94", "pod":"csi-node-driver-6gm2m", "timestamp":"2024-02-09 18:36:11.026534215 +0000 UTC"}, Hostname:"10.0.0.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.038 [INFO][2628] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.038 [INFO][2628] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.038 [INFO][2628] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.94' Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.040 [INFO][2628] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" host="10.0.0.94" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.044 [INFO][2628] ipam.go 372: Looking up existing affinities for host host="10.0.0.94" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.048 [INFO][2628] ipam.go 489: Trying affinity for 192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.050 [INFO][2628] ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.052 [INFO][2628] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.052 [INFO][2628] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" host="10.0.0.94" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.053 [INFO][2628] ipam.go 1682: Creating new handle: k8s-pod-network.beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.056 [INFO][2628] ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" host="10.0.0.94" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.060 [INFO][2628] ipam.go 1216: Successfully claimed IPs: [192.168.24.1/26] block=192.168.24.0/26 handle="k8s-pod-network.beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" host="10.0.0.94" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.060 [INFO][2628] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.1/26] handle="k8s-pod-network.beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" host="10.0.0.94" Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.060 [INFO][2628] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:36:11.075944 env[1218]: 2024-02-09 18:36:11.060 [INFO][2628] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.24.1/26] IPv6=[] ContainerID="beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" HandleID="k8s-pod-network.beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:11.076472 env[1218]: 2024-02-09 18:36:11.062 [INFO][2614] k8s.go 385: Populated endpoint ContainerID="beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" Namespace="calico-system" Pod="csi-node-driver-6gm2m" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-csi--node--driver--6gm2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d972391-7603-4ba7-9c5f-70ab2777349a", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"", Pod:"csi-node-driver-6gm2m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali83bc453dce9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:11.076472 env[1218]: 2024-02-09 18:36:11.062 [INFO][2614] k8s.go 386: Calico CNI using IPs: [192.168.24.1/32] ContainerID="beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" Namespace="calico-system" Pod="csi-node-driver-6gm2m" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:11.076472 env[1218]: 2024-02-09 18:36:11.062 [INFO][2614] dataplane_linux.go 68: Setting the host side veth name to cali83bc453dce9 ContainerID="beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" Namespace="calico-system" Pod="csi-node-driver-6gm2m" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:11.076472 env[1218]: 2024-02-09 18:36:11.065 [INFO][2614] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" Namespace="calico-system" Pod="csi-node-driver-6gm2m" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:11.076472 env[1218]: 2024-02-09 18:36:11.065 [INFO][2614] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" Namespace="calico-system" Pod="csi-node-driver-6gm2m" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-csi--node--driver--6gm2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d972391-7603-4ba7-9c5f-70ab2777349a", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a", Pod:"csi-node-driver-6gm2m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali83bc453dce9", MAC:"fe:08:bd:0a:07:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:11.076472 env[1218]: 2024-02-09 18:36:11.074 [INFO][2614] k8s.go 491: Wrote updated endpoint to datastore ContainerID="beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a" Namespace="calico-system" Pod="csi-node-driver-6gm2m" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:11.086000 audit[2651]: NETFILTER_CFG table=filter:87 family=2 entries=36 op=nft_register_chain pid=2651 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:11.087991 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 9 18:36:11.088064 kernel: audit: type=1325 audit(1707503771.086:262): table=filter:87 family=2 entries=36 op=nft_register_chain pid=2651 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:11.086000 audit[2651]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19908 a0=3 a1=ffffcf254370 a2=0 a3=ffff8ce49fa8 items=0 ppid=2352 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:11.089948 env[1218]: time="2024-02-09T18:36:11.089874224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:11.089948 env[1218]: time="2024-02-09T18:36:11.089917142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:11.089948 env[1218]: time="2024-02-09T18:36:11.089927462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:11.090319 env[1218]: time="2024-02-09T18:36:11.090283690Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a pid=2658 runtime=io.containerd.runc.v2 Feb 9 18:36:11.092159 kernel: audit: type=1300 audit(1707503771.086:262): arch=c00000b7 syscall=211 success=yes exit=19908 a0=3 a1=ffffcf254370 a2=0 a3=ffff8ce49fa8 items=0 ppid=2352 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:11.092212 kernel: audit: type=1327 audit(1707503771.086:262): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:11.086000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:11.126066 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:36:11.136470 env[1218]: time="2024-02-09T18:36:11.136429391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gm2m,Uid:8d972391-7603-4ba7-9c5f-70ab2777349a,Namespace:calico-system,Attempt:1,} returns sandbox id \"beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a\"" Feb 9 18:36:11.138005 env[1218]: time="2024-02-09T18:36:11.137955221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 18:36:11.724214 kubelet[1561]: E0209 18:36:11.724163 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:12.022273 env[1218]: time="2024-02-09T18:36:12.022232372Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:12.024060 env[1218]: time="2024-02-09T18:36:12.024022475Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:12.025587 env[1218]: time="2024-02-09T18:36:12.025561107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:12.027185 env[1218]: time="2024-02-09T18:36:12.027149496Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:12.027641 env[1218]: time="2024-02-09T18:36:12.027613801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8\"" Feb 9 18:36:12.029120 env[1218]: time="2024-02-09T18:36:12.029088234Z" level=info msg="CreateContainer within sandbox \"beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 18:36:12.038985 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3731214519.mount: Deactivated successfully. Feb 9 18:36:12.042752 env[1218]: time="2024-02-09T18:36:12.042706202Z" level=info msg="CreateContainer within sandbox \"beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"03c2226155ce5c99884793866914c8775f57a4eba792205b830c299b23aae340\"" Feb 9 18:36:12.043219 env[1218]: time="2024-02-09T18:36:12.043185107Z" level=info msg="StartContainer for \"03c2226155ce5c99884793866914c8775f57a4eba792205b830c299b23aae340\"" Feb 9 18:36:12.105530 env[1218]: time="2024-02-09T18:36:12.105479327Z" level=info msg="StartContainer for \"03c2226155ce5c99884793866914c8775f57a4eba792205b830c299b23aae340\" returns successfully" Feb 9 18:36:12.106722 env[1218]: time="2024-02-09T18:36:12.106693129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 18:36:12.646002 systemd-networkd[1103]: cali83bc453dce9: Gained IPv6LL Feb 9 18:36:12.725232 kubelet[1561]: E0209 18:36:12.725188 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:13.152664 env[1218]: time="2024-02-09T18:36:13.152604957Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:13.154526 env[1218]: time="2024-02-09T18:36:13.154483780Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:13.155837 env[1218]: time="2024-02-09T18:36:13.155800900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:13.157091 env[1218]: time="2024-02-09T18:36:13.157055142Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:13.157531 env[1218]: time="2024-02-09T18:36:13.157485129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43\"" Feb 9 18:36:13.159229 env[1218]: time="2024-02-09T18:36:13.159193157Z" level=info msg="CreateContainer within sandbox \"beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 18:36:13.168968 env[1218]: time="2024-02-09T18:36:13.168923582Z" level=info msg="CreateContainer within sandbox \"beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b0aa36eb7866ace5952784d8516854f069d0de3a91463a4317bdfecae1699226\"" Feb 9 18:36:13.169510 env[1218]: time="2024-02-09T18:36:13.169472366Z" level=info msg="StartContainer for \"b0aa36eb7866ace5952784d8516854f069d0de3a91463a4317bdfecae1699226\"" Feb 9 18:36:13.188825 systemd[1]: run-containerd-runc-k8s.io-b0aa36eb7866ace5952784d8516854f069d0de3a91463a4317bdfecae1699226-runc.YXf5q2.mount: Deactivated successfully. 
Feb 9 18:36:13.247125 env[1218]: time="2024-02-09T18:36:13.247081214Z" level=info msg="StartContainer for \"b0aa36eb7866ace5952784d8516854f069d0de3a91463a4317bdfecae1699226\" returns successfully" Feb 9 18:36:13.725628 kubelet[1561]: E0209 18:36:13.725588 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:13.781311 kubelet[1561]: I0209 18:36:13.781288 1561 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 18:36:13.781311 kubelet[1561]: I0209 18:36:13.781315 1561 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 18:36:13.945322 kubelet[1561]: I0209 18:36:13.945281 1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-6gm2m" podStartSLOduration=-9.223372005909527e+09 pod.CreationTimestamp="2024-02-09 18:35:43 +0000 UTC" firstStartedPulling="2024-02-09 18:36:11.137686069 +0000 UTC m=+41.713370201" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:36:13.945232179 +0000 UTC m=+44.520916311" watchObservedRunningTime="2024-02-09 18:36:13.945248098 +0000 UTC m=+44.520932230" Feb 9 18:36:14.726173 kubelet[1561]: E0209 18:36:14.726125 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:15.727571 kubelet[1561]: E0209 18:36:15.727531 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:16.731143 kubelet[1561]: E0209 18:36:16.731040 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:16.851618 env[1218]: time="2024-02-09T18:36:16.851578732Z" level=info msg="StopPodSandbox for \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\"" Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.888 [INFO][2789] k8s.go 578: Cleaning up netns ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.888 [INFO][2789] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" iface="eth0" netns="/var/run/netns/cni-268bfc21-d04f-5749-cd77-d3140a2271ea" Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.889 [INFO][2789] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" iface="eth0" netns="/var/run/netns/cni-268bfc21-d04f-5749-cd77-d3140a2271ea" Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.889 [INFO][2789] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" iface="eth0" netns="/var/run/netns/cni-268bfc21-d04f-5749-cd77-d3140a2271ea" Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.889 [INFO][2789] k8s.go 585: Releasing IP address(es) ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.889 [INFO][2789] utils.go 188: Calico CNI releasing IP address ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.905 [INFO][2796] ipam_plugin.go 415: Releasing address using handleID ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" HandleID="k8s-pod-network.4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.905 [INFO][2796] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.905 [INFO][2796] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.915 [WARNING][2796] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" HandleID="k8s-pod-network.4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.915 [INFO][2796] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" HandleID="k8s-pod-network.4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.917 [INFO][2796] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:36:16.919248 env[1218]: 2024-02-09 18:36:16.918 [INFO][2789] k8s.go 591: Teardown processing complete. ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:16.921217 systemd[1]: run-netns-cni\x2d268bfc21\x2dd04f\x2d5749\x2dcd77\x2dd3140a2271ea.mount: Deactivated successfully. 
Feb 9 18:36:16.921777 env[1218]: time="2024-02-09T18:36:16.921738158Z" level=info msg="TearDown network for sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\" successfully" Feb 9 18:36:16.921854 env[1218]: time="2024-02-09T18:36:16.921837876Z" level=info msg="StopPodSandbox for \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\" returns successfully" Feb 9 18:36:16.922544 env[1218]: time="2024-02-09T18:36:16.922514538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-q8gfg,Uid:b02c0b3a-13c2-4298-99e1-5776ed917536,Namespace:default,Attempt:1,}" Feb 9 18:36:17.020012 systemd-networkd[1103]: cali993f08f68af: Link UP Feb 9 18:36:17.021473 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:36:17.021519 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali993f08f68af: link becomes ready Feb 9 18:36:17.021393 systemd-networkd[1103]: cali993f08f68af: Gained carrier Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:16.962 [INFO][2808] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0 nginx-deployment-8ffc5cf85- default b02c0b3a-13c2-4298-99e1-5776ed917536 964 0 2024-02-09 18:36:00 +0000 UTC map[app:nginx pod-template-hash:8ffc5cf85 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.94 nginx-deployment-8ffc5cf85-q8gfg eth0 default [] [] [kns.default ksa.default.default] cali993f08f68af [] []}} ContainerID="fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" Namespace="default" Pod="nginx-deployment-8ffc5cf85-q8gfg" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:16.962 [INFO][2808] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" Namespace="default" Pod="nginx-deployment-8ffc5cf85-q8gfg" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:16.983 [INFO][2818] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" HandleID="k8s-pod-network.fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:16.995 [INFO][2818] ipam_plugin.go 268: Auto assigning IP ContainerID="fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" HandleID="k8s-pod-network.fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000222080), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.94", "pod":"nginx-deployment-8ffc5cf85-q8gfg", "timestamp":"2024-02-09 18:36:16.983106497 +0000 UTC"}, Hostname:"10.0.0.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:16.995 [INFO][2818] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:16.995 [INFO][2818] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:16.995 [INFO][2818] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.94' Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:16.997 [INFO][2818] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" host="10.0.0.94" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:17.000 [INFO][2818] ipam.go 372: Looking up existing affinities for host host="10.0.0.94" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:17.004 [INFO][2818] ipam.go 489: Trying affinity for 192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:17.006 [INFO][2818] ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:17.008 [INFO][2818] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:17.008 [INFO][2818] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" host="10.0.0.94" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:17.009 [INFO][2818] ipam.go 1682: Creating new handle: k8s-pod-network.fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:17.012 [INFO][2818] ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" host="10.0.0.94" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:17.016 [INFO][2818] ipam.go 1216: Successfully claimed IPs: [192.168.24.2/26] block=192.168.24.0/26 handle="k8s-pod-network.fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" host="10.0.0.94" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:17.016 [INFO][2818] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.2/26] handle="k8s-pod-network.fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" host="10.0.0.94" Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:17.016 [INFO][2818] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:36:17.028926 env[1218]: 2024-02-09 18:36:17.017 [INFO][2818] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.24.2/26] IPv6=[] ContainerID="fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" HandleID="k8s-pod-network.fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:17.029493 env[1218]: 2024-02-09 18:36:17.018 [INFO][2808] k8s.go 385: Populated endpoint ContainerID="fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" Namespace="default" Pod="nginx-deployment-8ffc5cf85-q8gfg" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"b02c0b3a-13c2-4298-99e1-5776ed917536", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"", Pod:"nginx-deployment-8ffc5cf85-q8gfg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali993f08f68af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:17.029493 env[1218]: 2024-02-09 18:36:17.018 [INFO][2808] k8s.go 386: Calico CNI using IPs: [192.168.24.2/32] ContainerID="fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" Namespace="default" Pod="nginx-deployment-8ffc5cf85-q8gfg" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:17.029493 env[1218]: 2024-02-09 18:36:17.018 [INFO][2808] dataplane_linux.go 68: Setting the host side veth name to cali993f08f68af ContainerID="fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" Namespace="default" Pod="nginx-deployment-8ffc5cf85-q8gfg" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:17.029493 env[1218]: 2024-02-09 18:36:17.021 [INFO][2808] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" Namespace="default" Pod="nginx-deployment-8ffc5cf85-q8gfg" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:17.029493 env[1218]: 2024-02-09 18:36:17.021 [INFO][2808] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" Namespace="default" Pod="nginx-deployment-8ffc5cf85-q8gfg" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"b02c0b3a-13c2-4298-99e1-5776ed917536", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a", Pod:"nginx-deployment-8ffc5cf85-q8gfg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali993f08f68af", MAC:"3e:dc:f9:09:63:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:17.029493 env[1218]: 2024-02-09 18:36:17.027 [INFO][2808] k8s.go 491: Wrote updated endpoint to datastore ContainerID="fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a" Namespace="default" Pod="nginx-deployment-8ffc5cf85-q8gfg" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:17.038000 audit[2844]: NETFILTER_CFG table=filter:88 family=2 entries=40 op=nft_register_chain pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:17.038000 audit[2844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21064 a0=3 a1=fffff7b81210 a2=0 a3=ffffa04b6fa8 items=0 ppid=2352 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:17.044141 kernel: audit: type=1325 audit(1707503777.038:263): table=filter:88 family=2 entries=40 op=nft_register_chain pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:17.044211 kernel: audit: type=1300 audit(1707503777.038:263): arch=c00000b7 syscall=211 success=yes exit=21064 a0=3 a1=fffff7b81210 a2=0 a3=ffffa04b6fa8 items=0 ppid=2352 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:17.044235 kernel: audit: type=1327 audit(1707503777.038:263): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:17.038000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:17.046294 env[1218]: time="2024-02-09T18:36:17.046229361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:17.046447 env[1218]: time="2024-02-09T18:36:17.046424476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:17.046528 env[1218]: time="2024-02-09T18:36:17.046506754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:17.046773 env[1218]: time="2024-02-09T18:36:17.046742428Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a pid=2852 runtime=io.containerd.runc.v2 Feb 9 18:36:17.087482 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:36:17.104323 env[1218]: time="2024-02-09T18:36:17.104284373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-q8gfg,Uid:b02c0b3a-13c2-4298-99e1-5776ed917536,Namespace:default,Attempt:1,} returns sandbox id \"fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a\"" Feb 9 18:36:17.106098 env[1218]: time="2024-02-09T18:36:17.106045328Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 18:36:17.731236 kubelet[1561]: E0209 18:36:17.731191 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:17.921373 systemd[1]: run-containerd-runc-k8s.io-fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a-runc.3g8qDh.mount: Deactivated successfully. Feb 9 18:36:18.731811 kubelet[1561]: E0209 18:36:18.731765 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:18.918341 systemd-networkd[1103]: cali993f08f68af: Gained IPv6LL Feb 9 18:36:19.231465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377184885.mount: Deactivated successfully. 
Feb 9 18:36:19.733333 kubelet[1561]: E0209 18:36:19.733281 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:20.007906 env[1218]: time="2024-02-09T18:36:20.007836141Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:20.011808 env[1218]: time="2024-02-09T18:36:20.011776894Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:20.014088 env[1218]: time="2024-02-09T18:36:20.013851327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:20.015743 env[1218]: time="2024-02-09T18:36:20.015714446Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:20.016446 env[1218]: time="2024-02-09T18:36:20.016415910Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 18:36:20.018590 env[1218]: time="2024-02-09T18:36:20.018555383Z" level=info msg="CreateContainer within sandbox \"fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 18:36:20.031219 env[1218]: time="2024-02-09T18:36:20.031155222Z" level=info msg="CreateContainer within sandbox \"fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"51befb7f37876990032d2caec35e4705eba04f02f585cfb034debe9012a2b4d1\"" Feb 9 18:36:20.032844 env[1218]: time="2024-02-09T18:36:20.032800825Z" level=info msg="StartContainer for \"51befb7f37876990032d2caec35e4705eba04f02f585cfb034debe9012a2b4d1\"" Feb 9 18:36:20.088352 env[1218]: time="2024-02-09T18:36:20.088303348Z" level=info msg="StartContainer for \"51befb7f37876990032d2caec35e4705eba04f02f585cfb034debe9012a2b4d1\" returns successfully" Feb 9 18:36:20.734306 kubelet[1561]: E0209 18:36:20.734255 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:20.959454 kubelet[1561]: I0209 18:36:20.959415 1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-q8gfg" podStartSLOduration=-9.223372015895397e+09 pod.CreationTimestamp="2024-02-09 18:36:00 +0000 UTC" firstStartedPulling="2024-02-09 18:36:17.105503222 +0000 UTC m=+47.681187354" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:36:20.958896064 +0000 UTC m=+51.534580196" watchObservedRunningTime="2024-02-09 18:36:20.959379333 +0000 UTC m=+51.535063465" Feb 9 18:36:21.735215 kubelet[1561]: E0209 18:36:21.735171 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:22.736087 kubelet[1561]: E0209 18:36:22.736051 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:23.736825 kubelet[1561]: E0209 
18:36:23.736782 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:24.737528 kubelet[1561]: E0209 18:36:24.737473 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:25.738554 kubelet[1561]: E0209 18:36:25.738511 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:26.738765 kubelet[1561]: E0209 18:36:26.738723 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:27.192000 audit[2979]: NETFILTER_CFG table=filter:89 family=2 entries=18 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.192000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffe927b520 a2=0 a3=ffff96ef46c0 items=0 ppid=1778 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.196671 kubelet[1561]: I0209 18:36:27.196637 1561 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:36:27.197728 kernel: audit: type=1325 audit(1707503787.192:264): table=filter:89 family=2 entries=18 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.197786 kernel: audit: type=1300 audit(1707503787.192:264): arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffe927b520 a2=0 a3=ffff96ef46c0 items=0 ppid=1778 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.197807 kernel: audit: type=1327 audit(1707503787.192:264): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:27.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:27.193000 audit[2979]: NETFILTER_CFG table=nat:90 family=2 entries=78 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.193000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe927b520 a2=0 a3=ffff96ef46c0 items=0 ppid=1778 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.205556 kernel: audit: type=1325 audit(1707503787.193:265): table=nat:90 family=2 entries=78 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.205633 kernel: audit: type=1300 audit(1707503787.193:265): arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe927b520 a2=0 a3=ffff96ef46c0 items=0 ppid=1778 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.205679 kernel: audit: type=1327 audit(1707503787.193:265): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:27.193000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:27.236000 audit[3005]: NETFILTER_CFG table=filter:91 family=2 entries=30 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.236000 audit[3005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffdbf6ca60 a2=0 a3=ffffa3a536c0 items=0 ppid=1778 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.241563 kernel: audit: type=1325 audit(1707503787.236:266): table=filter:91 family=2 entries=30 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.241608 kernel: audit: type=1300 audit(1707503787.236:266): arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffdbf6ca60 a2=0 a3=ffffa3a536c0 items=0 ppid=1778 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.241637 kernel: audit: type=1327 audit(1707503787.236:266): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:27.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:27.238000 audit[3005]: NETFILTER_CFG table=nat:92 family=2 entries=78 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.238000 audit[3005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffdbf6ca60 a2=0 a3=ffffa3a536c0 items=0 ppid=1778 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.238000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:27.245880 kernel: audit: type=1325 audit(1707503787.238:267): table=nat:92 family=2 entries=78 op=nft_register_rule pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:27.305276 kubelet[1561]: I0209 18:36:27.305225 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5c04b148-3769-495e-809d-e62f09e6577f-data\") pod \"nfs-server-provisioner-0\" (UID: \"5c04b148-3769-495e-809d-e62f09e6577f\") " pod="default/nfs-server-provisioner-0" Feb 9 18:36:27.305276 kubelet[1561]: I0209 18:36:27.305281 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klr7z\" (UniqueName: \"kubernetes.io/projected/5c04b148-3769-495e-809d-e62f09e6577f-kube-api-access-klr7z\") pod \"nfs-server-provisioner-0\" (UID: \"5c04b148-3769-495e-809d-e62f09e6577f\") " pod="default/nfs-server-provisioner-0" Feb 9 18:36:27.501966 env[1218]: time="2024-02-09T18:36:27.501831196Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5c04b148-3769-495e-809d-e62f09e6577f,Namespace:default,Attempt:0,}" Feb 9 18:36:27.605792 systemd-networkd[1103]: cali60e51b789ff: Link UP Feb 9 18:36:27.607457 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:36:27.607504 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali60e51b789ff: link becomes ready Feb 9 18:36:27.607735 systemd-networkd[1103]: cali60e51b789ff: Gained carrier Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.544 [INFO][3007] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.94-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 5c04b148-3769-495e-809d-e62f09e6577f 1015 0 2024-02-09 18:36:27 +0000 UTC map[app:nfs-server-provisioner chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.94 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.544 [INFO][3007] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.566 [INFO][3021] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" HandleID="k8s-pod-network.ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" Workload="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.579 [INFO][3021] ipam_plugin.go 268: Auto assigning IP ContainerID="ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" HandleID="k8s-pod-network.ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" Workload="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000cdab0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.94", "pod":"nfs-server-provisioner-0", "timestamp":"2024-02-09 18:36:27.566788719 +0000 UTC"}, Hostname:"10.0.0.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.579 [INFO][3021] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.579 [INFO][3021] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.580 [INFO][3021] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.94' Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.581 [INFO][3021] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" host="10.0.0.94" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.584 [INFO][3021] ipam.go 372: Looking up existing affinities for host host="10.0.0.94" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.588 [INFO][3021] ipam.go 489: Trying affinity for 192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.590 [INFO][3021] ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.591 [INFO][3021] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.592 [INFO][3021] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" host="10.0.0.94" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.593 [INFO][3021] ipam.go 1682: Creating new handle: k8s-pod-network.ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.596 [INFO][3021] ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" host="10.0.0.94" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.601 [INFO][3021] ipam.go 1216: Successfully claimed IPs: [192.168.24.3/26] block=192.168.24.0/26 handle="k8s-pod-network.ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" host="10.0.0.94" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.601 [INFO][3021] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.3/26] handle="k8s-pod-network.ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" host="10.0.0.94" Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.601 [INFO][3021] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:36:27.622977 env[1218]: 2024-02-09 18:36:27.601 [INFO][3021] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.24.3/26] IPv6=[] ContainerID="ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" HandleID="k8s-pod-network.ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" Workload="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Feb 9 18:36:27.623512 env[1218]: 2024-02-09 18:36:27.603 [INFO][3007] k8s.go 385: Populated endpoint ContainerID="ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5c04b148-3769-495e-809d-e62f09e6577f", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.24.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:27.623512 env[1218]: 2024-02-09 18:36:27.603 [INFO][3007] k8s.go 386: Calico CNI using IPs: [192.168.24.3/32] ContainerID="ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Feb 9 18:36:27.623512 env[1218]: 2024-02-09 18:36:27.603 [INFO][3007] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Feb 9 18:36:27.623512 env[1218]: 2024-02-09 18:36:27.608 [INFO][3007] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Feb 9 18:36:27.623648 env[1218]: 2024-02-09 18:36:27.608 [INFO][3007] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5c04b148-3769-495e-809d-e62f09e6577f", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.24.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f6:b0:b1:d4:37:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:27.623648 env[1218]: 2024-02-09 18:36:27.621 [INFO][3007] k8s.go 491: Wrote updated endpoint to datastore ContainerID="ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Feb 9 18:36:27.638440 env[1218]: time="2024-02-09T18:36:27.638345649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:27.638440 env[1218]: time="2024-02-09T18:36:27.638402888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:27.638624 env[1218]: time="2024-02-09T18:36:27.638413408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:27.638796 env[1218]: time="2024-02-09T18:36:27.638754882Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c pid=3053 runtime=io.containerd.runc.v2 Feb 9 18:36:27.638000 audit[3055]: NETFILTER_CFG table=filter:93 family=2 entries=38 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:27.638000 audit[3055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19500 a0=3 a1=ffffe43541e0 a2=0 a3=ffff981a3fa8 items=0 ppid=2352 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:27.638000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:27.679269 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:36:27.698429 env[1218]: time="2024-02-09T18:36:27.698369498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5c04b148-3769-495e-809d-e62f09e6577f,Namespace:default,Attempt:0,} returns sandbox id \"ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c\"" Feb 9 18:36:27.700004 env[1218]: time="2024-02-09T18:36:27.699975830Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 18:36:27.739214 kubelet[1561]: E0209 18:36:27.739169 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:28.418788 systemd[1]: run-containerd-runc-k8s.io-ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c-runc.P84BAR.mount: Deactivated successfully. Feb 9 18:36:28.646068 systemd-networkd[1103]: cali60e51b789ff: Gained IPv6LL Feb 9 18:36:28.739825 kubelet[1561]: E0209 18:36:28.739520 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:29.707441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425135693.mount: Deactivated successfully. Feb 9 18:36:29.740318 kubelet[1561]: E0209 18:36:29.740282 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:30.224932 kubelet[1561]: E0209 18:36:30.224616 1561 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 18:36:30.700209 kubelet[1561]: E0209 18:36:30.700162 1561 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:30.706967 env[1218]: time="2024-02-09T18:36:30.706929658Z" level=info msg="StopPodSandbox for \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\"" Feb 9 18:36:30.742886 kubelet[1561]: E0209 18:36:30.741133 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.743 [WARNING][3129] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-csi--node--driver--6gm2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d972391-7603-4ba7-9c5f-70ab2777349a", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a", Pod:"csi-node-driver-6gm2m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali83bc453dce9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.743 [INFO][3129] k8s.go 578: Cleaning up netns ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.743 [INFO][3129] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" iface="eth0" netns="" Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.744 [INFO][3129] k8s.go 585: Releasing IP address(es) ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.744 [INFO][3129] utils.go 188: Calico CNI releasing IP address ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.781 [INFO][3136] ipam_plugin.go 415: Releasing address using handleID ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" HandleID="k8s-pod-network.7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.781 [INFO][3136] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.781 [INFO][3136] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.790 [WARNING][3136] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" HandleID="k8s-pod-network.7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.790 [INFO][3136] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" HandleID="k8s-pod-network.7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.792 [INFO][3136] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:36:30.794105 env[1218]: 2024-02-09 18:36:30.793 [INFO][3129] k8s.go 591: Teardown processing complete. ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:30.794571 env[1218]: time="2024-02-09T18:36:30.794138616Z" level=info msg="TearDown network for sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\" successfully" Feb 9 18:36:30.794571 env[1218]: time="2024-02-09T18:36:30.794171415Z" level=info msg="StopPodSandbox for \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\" returns successfully" Feb 9 18:36:30.795084 env[1218]: time="2024-02-09T18:36:30.795056921Z" level=info msg="RemovePodSandbox for \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\"" Feb 9 18:36:30.795148 env[1218]: time="2024-02-09T18:36:30.795093481Z" level=info msg="Forcibly stopping sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\"" Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.835 [WARNING][3161] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-csi--node--driver--6gm2m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d972391-7603-4ba7-9c5f-70ab2777349a", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 35, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"beea1306767ff68810fba6d8ee116ca2c421e00042a757212dc51cdcd4c51a6a", Pod:"csi-node-driver-6gm2m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali83bc453dce9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.835 [INFO][3161] k8s.go 578: Cleaning up netns ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.836 [INFO][3161] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" iface="eth0" netns="" Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.836 [INFO][3161] k8s.go 585: Releasing IP address(es) ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.836 [INFO][3161] utils.go 188: Calico CNI releasing IP address ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.856 [INFO][3170] ipam_plugin.go 415: Releasing address using handleID ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" HandleID="k8s-pod-network.7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.856 [INFO][3170] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.856 [INFO][3170] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.866 [WARNING][3170] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" HandleID="k8s-pod-network.7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.866 [INFO][3170] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" HandleID="k8s-pod-network.7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Workload="10.0.0.94-k8s-csi--node--driver--6gm2m-eth0" Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.867 [INFO][3170] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:36:30.869706 env[1218]: 2024-02-09 18:36:30.868 [INFO][3161] k8s.go 591: Teardown processing complete. ContainerID="7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca" Feb 9 18:36:30.870174 env[1218]: time="2024-02-09T18:36:30.869778355Z" level=info msg="TearDown network for sandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\" successfully" Feb 9 18:36:30.943115 env[1218]: time="2024-02-09T18:36:30.943063890Z" level=info msg="RemovePodSandbox \"7b0bb4fcaf0974d437405ca919e605ab114d9c8a4bf31bc74b5d4fcb7851fdca\" returns successfully" Feb 9 18:36:30.943627 env[1218]: time="2024-02-09T18:36:30.943577242Z" level=info msg="StopPodSandbox for \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\"" Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:30.977 [WARNING][3193] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"b02c0b3a-13c2-4298-99e1-5776ed917536", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a", Pod:"nginx-deployment-8ffc5cf85-q8gfg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali993f08f68af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:30.977 [INFO][3193] k8s.go 578: Cleaning up netns ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:30.977 [INFO][3193] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" iface="eth0" netns="" Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:30.977 [INFO][3193] k8s.go 585: Releasing IP address(es) ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:30.977 [INFO][3193] utils.go 188: Calico CNI releasing IP address ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:30.996 [INFO][3201] ipam_plugin.go 415: Releasing address using handleID ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" HandleID="k8s-pod-network.4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:30.996 [INFO][3201] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:30.996 [INFO][3201] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:31.005 [WARNING][3201] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" HandleID="k8s-pod-network.4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:31.005 [INFO][3201] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" HandleID="k8s-pod-network.4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:31.007 [INFO][3201] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:36:31.008939 env[1218]: 2024-02-09 18:36:31.008 [INFO][3193] k8s.go 591: Teardown processing complete. ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:31.009393 env[1218]: time="2024-02-09T18:36:31.008978105Z" level=info msg="TearDown network for sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\" successfully" Feb 9 18:36:31.009393 env[1218]: time="2024-02-09T18:36:31.009011344Z" level=info msg="StopPodSandbox for \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\" returns successfully" Feb 9 18:36:31.009444 env[1218]: time="2024-02-09T18:36:31.009397138Z" level=info msg="RemovePodSandbox for \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\"" Feb 9 18:36:31.009468 env[1218]: time="2024-02-09T18:36:31.009438258Z" level=info msg="Forcibly stopping sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\"" Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.044 [WARNING][3224] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0", GenerateName:"nginx-deployment-8ffc5cf85-", Namespace:"default", SelfLink:"", UID:"b02c0b3a-13c2-4298-99e1-5776ed917536", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8ffc5cf85", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"fb44908a06e4ca38e75fe468c3f831fb3429cde55bb63db054396d09c5a8c65a", Pod:"nginx-deployment-8ffc5cf85-q8gfg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali993f08f68af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.044 [INFO][3224] k8s.go 578: Cleaning up netns ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.045 [INFO][3224] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" iface="eth0" netns="" Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.045 [INFO][3224] k8s.go 585: Releasing IP address(es) ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.045 [INFO][3224] utils.go 188: Calico CNI releasing IP address ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.064 [INFO][3232] ipam_plugin.go 415: Releasing address using handleID ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" HandleID="k8s-pod-network.4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.064 [INFO][3232] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.064 [INFO][3232] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.073 [WARNING][3232] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" HandleID="k8s-pod-network.4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.073 [INFO][3232] ipam_plugin.go 443: Releasing address using workloadID ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" HandleID="k8s-pod-network.4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Workload="10.0.0.94-k8s-nginx--deployment--8ffc5cf85--q8gfg-eth0" Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.074 [INFO][3232] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:36:31.076506 env[1218]: 2024-02-09 18:36:31.075 [INFO][3224] k8s.go 591: Teardown processing complete. ContainerID="4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e" Feb 9 18:36:31.076990 env[1218]: time="2024-02-09T18:36:31.076528161Z" level=info msg="TearDown network for sandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\" successfully" Feb 9 18:36:31.079438 env[1218]: time="2024-02-09T18:36:31.079403238Z" level=info msg="RemovePodSandbox \"4f7bd5568610505364f8f75d918fe9c565eb0449df9e34277ef8396dbd49f67e\" returns successfully" Feb 9 18:36:31.124530 kubelet[1561]: I0209 18:36:31.124500 1561 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:36:31.138000 audit[3264]: NETFILTER_CFG table=filter:94 family=2 entries=31 op=nft_register_rule pid=3264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.138000 audit[3264]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=ffffd0db5db0 a2=0 a3=ffffb5eb36c0 items=0 ppid=1778 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:31.138000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:31.139000 audit[3264]: NETFILTER_CFG table=nat:95 family=2 entries=78 op=nft_register_rule pid=3264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.139000 audit[3264]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd0db5db0 a2=0 a3=ffffb5eb36c0 items=0 ppid=1778 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:31.139000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:31.175000 audit[3290]: NETFILTER_CFG table=filter:96 family=2 entries=32 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.175000 audit[3290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=ffffe4bbc7e0 a2=0 a3=ffff98c346c0 items=0 ppid=1778 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:31.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:31.177000 audit[3290]: NETFILTER_CFG table=nat:97 
family=2 entries=78 op=nft_register_rule pid=3290 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:31.177000 audit[3290]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffe4bbc7e0 a2=0 a3=ffff98c346c0 items=0 ppid=1778 pid=3290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:31.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:31.226326 kubelet[1561]: I0209 18:36:31.226255 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz5k6\" (UniqueName: \"kubernetes.io/projected/dddff118-c48d-4361-8b4f-380e905702a1-kube-api-access-mz5k6\") pod \"calico-apiserver-7c96ff74b7-xjvks\" (UID: \"dddff118-c48d-4361-8b4f-380e905702a1\") " pod="calico-apiserver/calico-apiserver-7c96ff74b7-xjvks" Feb 9 18:36:31.226326 kubelet[1561]: I0209 18:36:31.226305 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dddff118-c48d-4361-8b4f-380e905702a1-calico-apiserver-certs\") pod \"calico-apiserver-7c96ff74b7-xjvks\" (UID: \"dddff118-c48d-4361-8b4f-380e905702a1\") " pod="calico-apiserver/calico-apiserver-7c96ff74b7-xjvks" Feb 9 18:36:31.327155 kubelet[1561]: E0209 18:36:31.326839 1561 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 18:36:31.327155 kubelet[1561]: E0209 18:36:31.326933 1561 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dddff118-c48d-4361-8b4f-380e905702a1-calico-apiserver-certs podName:dddff118-c48d-4361-8b4f-380e905702a1 nodeName:}" failed. No retries permitted until 2024-02-09 18:36:31.826913487 +0000 UTC m=+62.402597619 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dddff118-c48d-4361-8b4f-380e905702a1-calico-apiserver-certs") pod "calico-apiserver-7c96ff74b7-xjvks" (UID: "dddff118-c48d-4361-8b4f-380e905702a1") : secret "calico-apiserver-certs" not found Feb 9 18:36:31.437414 env[1218]: time="2024-02-09T18:36:31.437357333Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:31.439774 env[1218]: time="2024-02-09T18:36:31.439731657Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:31.441352 env[1218]: time="2024-02-09T18:36:31.441303633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:31.443208 env[1218]: time="2024-02-09T18:36:31.443179605Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:31.443867 env[1218]: time="2024-02-09T18:36:31.443819475Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 9 18:36:31.446038 env[1218]: time="2024-02-09T18:36:31.446005922Z" level=info msg="CreateContainer within sandbox \"ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 18:36:31.454667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586154425.mount: Deactivated successfully. 
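The NETFILTER_CFG audit records above carry the iptables-restore command line as a hex-encoded PROCTITLE field: the kernel reports /proc/<pid>/cmdline, whose argv elements are separated by NUL bytes, so auditd emits the field as hex rather than plain text. As an illustrative sketch only (this helper is not part of any tool shown in this log; the sample value is copied verbatim from the audit record for pid 3264), a few lines of Python recover the argv:

    # Decode an audit PROCTITLE value back into the argv it represents.
    # /proc/<pid>/cmdline separates arguments with NUL bytes, which is why
    # auditd hex-encodes the field instead of printing it as text.
    def decode_proctitle(hex_value: str) -> list[str]:
        raw = bytes.fromhex(hex_value)
        return [arg.decode("ascii", errors="replace") for arg in raw.split(b"\x00")]

    # Sample copied from the audit record for pid 3264 above.
    sample = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    print(decode_proctitle(sample))
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The same decoding applies to the longer PROCTITLE values for the iptables-nft-restore invocations earlier in the log.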
Feb 9 18:36:31.462809 env[1218]: time="2024-02-09T18:36:31.462747108Z" level=info msg="CreateContainer within sandbox \"ddc6c1a74e054b576d2bc3923f1d4242e663f5cfd0cd4356329e1cba5bfafa2c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4f1deb2764424819fe1eeeae7aff66570b7a0e65a6689d7d618706b65f0cb757\"" Feb 9 18:36:31.463260 env[1218]: time="2024-02-09T18:36:31.463236461Z" level=info msg="StartContainer for \"4f1deb2764424819fe1eeeae7aff66570b7a0e65a6689d7d618706b65f0cb757\"" Feb 9 18:36:31.525793 env[1218]: time="2024-02-09T18:36:31.525742673Z" level=info msg="StartContainer for \"4f1deb2764424819fe1eeeae7aff66570b7a0e65a6689d7d618706b65f0cb757\" returns successfully" Feb 9 18:36:31.741750 kubelet[1561]: E0209 18:36:31.741611 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:31.983567 kubelet[1561]: I0209 18:36:31.983509 1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372031871304e+09 pod.CreationTimestamp="2024-02-09 18:36:27 +0000 UTC" firstStartedPulling="2024-02-09 18:36:27.699448679 +0000 UTC m=+58.275132811" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:36:31.982616989 +0000 UTC m=+62.558301161" watchObservedRunningTime="2024-02-09 18:36:31.983473016 +0000 UTC m=+62.559157148" Feb 9 18:36:32.027968 env[1218]: time="2024-02-09T18:36:32.027927273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c96ff74b7-xjvks,Uid:dddff118-c48d-4361-8b4f-380e905702a1,Namespace:calico-apiserver,Attempt:0,}" Feb 9 18:36:32.139646 systemd-networkd[1103]: calif696a446098: Link UP Feb 9 18:36:32.140886 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:36:32.140949 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif696a446098: link becomes ready Feb 9 18:36:32.141163 systemd-networkd[1103]: calif696a446098: Gained carrier Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.077 [INFO][3353] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0 calico-apiserver-7c96ff74b7- calico-apiserver dddff118-c48d-4361-8b4f-380e905702a1 1079 0 2024-02-09 18:36:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c96ff74b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.94 calico-apiserver-7c96ff74b7-xjvks eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif696a446098 [] []}} ContainerID="b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-xjvks" WorkloadEndpoint="10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.077 [INFO][3353] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-xjvks" WorkloadEndpoint="10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.099 [INFO][3367] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" HandleID="k8s-pod-network.b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" Workload="10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.114 [INFO][3367] ipam_plugin.go 268: Auto assigning IP ContainerID="b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" HandleID="k8s-pod-network.b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" Workload="10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dab50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.94", "pod":"calico-apiserver-7c96ff74b7-xjvks", "timestamp":"2024-02-09 18:36:32.099339462 +0000 UTC"}, Hostname:"10.0.0.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.115 [INFO][3367] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.115 [INFO][3367] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.115 [INFO][3367] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.94' Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.116 [INFO][3367] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" host="10.0.0.94" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.120 [INFO][3367] ipam.go 372: Looking up existing affinities for host host="10.0.0.94" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.123 [INFO][3367] ipam.go 489: Trying affinity for 192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.125 [INFO][3367] ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.127 [INFO][3367] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.127 [INFO][3367] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" host="10.0.0.94" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.129 [INFO][3367] ipam.go 1682: Creating new handle: k8s-pod-network.b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.132 [INFO][3367] ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" host="10.0.0.94" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.135 [INFO][3367] ipam.go 1216: Successfully claimed IPs: [192.168.24.4/26] block=192.168.24.0/26 handle="k8s-pod-network.b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" host="10.0.0.94" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.136 [INFO][3367] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.4/26] handle="k8s-pod-network.b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" host="10.0.0.94" Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.136 [INFO][3367] ipam_plugin.go 377: 
Released host-wide IPAM lock. Feb 9 18:36:32.155572 env[1218]: 2024-02-09 18:36:32.136 [INFO][3367] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.24.4/26] IPv6=[] ContainerID="b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" HandleID="k8s-pod-network.b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" Workload="10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0" Feb 9 18:36:32.156166 env[1218]: 2024-02-09 18:36:32.138 [INFO][3353] k8s.go 385: Populated endpoint ContainerID="b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-xjvks" WorkloadEndpoint="10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0", GenerateName:"calico-apiserver-7c96ff74b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"dddff118-c48d-4361-8b4f-380e905702a1", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c96ff74b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"", Pod:"calico-apiserver-7c96ff74b7-xjvks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif696a446098", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:32.156166 env[1218]: 2024-02-09 18:36:32.138 [INFO][3353] k8s.go 386: Calico CNI using IPs: [192.168.24.4/32] ContainerID="b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-xjvks" WorkloadEndpoint="10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0" Feb 9 18:36:32.156166 env[1218]: 2024-02-09 18:36:32.138 [INFO][3353] dataplane_linux.go 68: Setting the host side veth name to calif696a446098 ContainerID="b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-xjvks" WorkloadEndpoint="10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0" Feb 9 18:36:32.156166 env[1218]: 2024-02-09 18:36:32.140 [INFO][3353] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-xjvks" WorkloadEndpoint="10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0" Feb 9 18:36:32.156166 env[1218]: 2024-02-09 18:36:32.141 [INFO][3353] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-xjvks" 
WorkloadEndpoint="10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0", GenerateName:"calico-apiserver-7c96ff74b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"dddff118-c48d-4361-8b4f-380e905702a1", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c96ff74b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f", Pod:"calico-apiserver-7c96ff74b7-xjvks", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif696a446098", MAC:"02:d6:6b:bc:d9:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:32.156166 env[1218]: 2024-02-09 18:36:32.153 [INFO][3353] k8s.go 491: Wrote updated endpoint to datastore ContainerID="b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f" Namespace="calico-apiserver" Pod="calico-apiserver-7c96ff74b7-xjvks" WorkloadEndpoint="10.0.0.94-k8s-calico--apiserver--7c96ff74b7--xjvks-eth0" Feb 9 18:36:32.171255 env[1218]: time="2024-02-09T18:36:32.171181764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:32.171403 env[1218]: time="2024-02-09T18:36:32.171263123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:32.171403 env[1218]: time="2024-02-09T18:36:32.171290402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:32.171506 env[1218]: time="2024-02-09T18:36:32.171465920Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f pid=3401 runtime=io.containerd.runc.v2 Feb 9 18:36:32.171000 audit[3407]: NETFILTER_CFG table=filter:98 family=2 entries=55 op=nft_register_chain pid=3407 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:32.171000 audit[3407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28104 a0=3 a1=ffffd1f6eaf0 a2=0 a3=ffffa7441fa8 items=0 ppid=2352 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:32.171000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:32.217000 audit[3455]: NETFILTER_CFG table=filter:99 family=2 entries=20 op=nft_register_rule pid=3455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:32.224757 kernel: kauditd_printk_skb: 20 callbacks suppressed Feb 9 18:36:32.224802 kernel: audit: type=1325 audit(1707503792.217:274): table=filter:99 family=2 entries=20 op=nft_register_rule pid=3455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:32.224842 kernel: audit: type=1300 audit(1707503792.217:274): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffe6ea2dd0 a2=0 a3=ffff9a6926c0 items=0 ppid=1778 pid=3455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:32.224898 kernel: audit: type=1327 audit(1707503792.217:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:32.217000 audit[3455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffe6ea2dd0 a2=0 a3=ffff9a6926c0 items=0 ppid=1778 pid=3455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:32.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:32.227384 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:36:32.217000 audit[3455]: NETFILTER_CFG table=nat:100 family=2 entries=162 op=nft_register_chain pid=3455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:32.217000 audit[3455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffe6ea2dd0 a2=0 a3=ffff9a6926c0 items=0 ppid=1778 pid=3455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:32.235273 kernel: audit: type=1325 audit(1707503792.217:275): table=nat:100 family=2 entries=162 op=nft_register_chain pid=3455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:32.235377 kernel: audit: type=1300 
audit(1707503792.217:275): arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffe6ea2dd0 a2=0 a3=ffff9a6926c0 items=0 ppid=1778 pid=3455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:32.235411 kernel: audit: type=1327 audit(1707503792.217:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:32.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:32.251643 env[1218]: time="2024-02-09T18:36:32.251600700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c96ff74b7-xjvks,Uid:dddff118-c48d-4361-8b4f-380e905702a1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f\"" Feb 9 18:36:32.252983 env[1218]: time="2024-02-09T18:36:32.252952680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 18:36:32.742027 kubelet[1561]: E0209 18:36:32.741986 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:33.638107 systemd-networkd[1103]: calif696a446098: Gained IPv6LL Feb 9 18:36:33.742761 kubelet[1561]: E0209 18:36:33.742720 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:34.391220 env[1218]: time="2024-02-09T18:36:34.391172666Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:34.393236 env[1218]: time="2024-02-09T18:36:34.393199358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:34.394698 env[1218]: time="2024-02-09T18:36:34.394664098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:34.396151 env[1218]: time="2024-02-09T18:36:34.396129197Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:34.396593 env[1218]: time="2024-02-09T18:36:34.396557951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 9 18:36:34.398195 env[1218]: time="2024-02-09T18:36:34.398163849Z" level=info msg="CreateContainer within sandbox \"b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 18:36:34.405845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528265953.mount: Deactivated successfully. 
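The v3.WorkloadEndpoint dumps earlier in this log (for nfs-server-provisioner-0) print the endpoint's port list as hexadecimal literals, e.g. Port:0x801 for the port named "nfs". As a small illustrative conversion in the same spirit as the sketch above, and not taken from any tool in the log, the hex values map to the decimal ports as follows (port names and values are copied from the dump; the UDP counterparts use the same numbers):

    # Port values copied from the WorkloadEndpointPort dump for
    # nfs-server-provisioner-0; names are the ones Calico recorded.
    ports = {"nfs": 0x801, "nlockmgr": 0x8023, "mountd": 0x4e50,
             "rquotad": 0x36b, "rpcbind": 0x6f, "statd": 0x296}
    for name, value in ports.items():
        print(f"{name:9s} {value}")
    # nfs       2049
    # nlockmgr  32803
    # mountd    20048
    # rquotad   875
    # rpcbind   111
    # statd     662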
Feb 9 18:36:34.407579 env[1218]: time="2024-02-09T18:36:34.407539998Z" level=info msg="CreateContainer within sandbox \"b82efcd081e6e77b52b38ee6ad79383b64011c6f522b34041aefa4ceecdd524f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b73a805de4ba695af6de546353a0ef97b49b25e68c72bbd4baa39bb0c880d5f1\"" Feb 9 18:36:34.408007 env[1218]: time="2024-02-09T18:36:34.407956152Z" level=info msg="StartContainer for \"b73a805de4ba695af6de546353a0ef97b49b25e68c72bbd4baa39bb0c880d5f1\"" Feb 9 18:36:34.467169 env[1218]: time="2024-02-09T18:36:34.467126447Z" level=info msg="StartContainer for \"b73a805de4ba695af6de546353a0ef97b49b25e68c72bbd4baa39bb0c880d5f1\" returns successfully" Feb 9 18:36:34.742953 kubelet[1561]: E0209 18:36:34.742832 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:34.991834 kubelet[1561]: I0209 18:36:34.991799 1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c96ff74b7-xjvks" podStartSLOduration=-9.223372032863018e+09 pod.CreationTimestamp="2024-02-09 18:36:31 +0000 UTC" firstStartedPulling="2024-02-09 18:36:32.252602885 +0000 UTC m=+62.828286977" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:36:34.991388899 +0000 UTC m=+65.567073111" watchObservedRunningTime="2024-02-09 18:36:34.991757574 +0000 UTC m=+65.567441706" Feb 9 18:36:35.036000 audit[3526]: NETFILTER_CFG table=filter:101 family=2 entries=8 op=nft_register_rule pid=3526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:35.036000 audit[3526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffff7fe2cb0 a2=0 a3=ffffabbe06c0 items=0 ppid=1778 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:35.041294 kernel: audit: type=1325 audit(1707503795.036:276): table=filter:101 family=2 entries=8 op=nft_register_rule pid=3526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:35.041353 kernel: audit: type=1300 audit(1707503795.036:276): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffff7fe2cb0 a2=0 a3=ffffabbe06c0 items=0 ppid=1778 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:35.041375 kernel: audit: type=1327 audit(1707503795.036:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:35.036000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:35.038000 audit[3526]: NETFILTER_CFG table=nat:102 family=2 entries=198 op=nft_register_rule pid=3526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:35.038000 audit[3526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=fffff7fe2cb0 a2=0 a3=ffffabbe06c0 items=0 ppid=1778 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:35.045914 kernel: audit: type=1325 audit(1707503795.038:277): table=nat:102 
family=2 entries=198 op=nft_register_rule pid=3526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:35.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:35.358000 audit[3552]: NETFILTER_CFG table=filter:103 family=2 entries=8 op=nft_register_rule pid=3552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:35.358000 audit[3552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffd7e7de30 a2=0 a3=ffffb461c6c0 items=0 ppid=1778 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:35.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:35.361000 audit[3552]: NETFILTER_CFG table=nat:104 family=2 entries=198 op=nft_register_rule pid=3552 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:36:35.361000 audit[3552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffd7e7de30 a2=0 a3=ffffb461c6c0 items=0 ppid=1778 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:35.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:36:35.743937 kubelet[1561]: E0209 18:36:35.743828 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:36.745155 kubelet[1561]: E0209 18:36:36.745092 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:37.745775 kubelet[1561]: E0209 18:36:37.745740 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:38.746483 kubelet[1561]: E0209 18:36:38.746444 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:39.746728 kubelet[1561]: E0209 18:36:39.746677 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:40.747539 kubelet[1561]: E0209 18:36:40.747504 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:41.748520 kubelet[1561]: E0209 18:36:41.748478 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:41.868114 kubelet[1561]: I0209 18:36:41.868087 1561 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:36:41.978415 kubelet[1561]: I0209 18:36:41.978373 1561 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6416206-704b-470d-a368-e8452c1d491a\" (UniqueName: \"kubernetes.io/nfs/f1b594e1-701b-4b6e-aeda-73ffba65019c-pvc-b6416206-704b-470d-a368-e8452c1d491a\") pod \"test-pod-1\" (UID: \"f1b594e1-701b-4b6e-aeda-73ffba65019c\") " pod="default/test-pod-1" Feb 9 18:36:41.978415 kubelet[1561]: I0209 18:36:41.978423 1561 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlsk\" (UniqueName: \"kubernetes.io/projected/f1b594e1-701b-4b6e-aeda-73ffba65019c-kube-api-access-7nlsk\") pod \"test-pod-1\" (UID: \"f1b594e1-701b-4b6e-aeda-73ffba65019c\") " pod="default/test-pod-1" Feb 9 18:36:42.098000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.102017 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 18:36:42.102068 kernel: audit: type=1400 audit(1707503802.098:280): avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.098000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.106749 kernel: Failed to create system directory netfs Feb 9 18:36:42.106779 kernel: Failed to create system directory netfs Feb 9 18:36:42.106796 kernel: audit: type=1400 audit(1707503802.098:280): avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.106812 kernel: Failed to create system directory netfs Feb 9 18:36:42.106830 kernel: audit: type=1400 audit(1707503802.098:280): avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.098000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.107182 kernel: Failed to create system directory netfs Feb 9 18:36:42.109056 kernel: audit: type=1400 audit(1707503802.098:280): avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.098000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.112038 kernel: audit: type=1300 audit(1707503802.098:280): arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaad4c225e0 a1=12c14 a2=aaaac5ebe028 a3=aaaad4c13010 items=0 ppid=62 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:42.098000 audit[3560]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaad4c225e0 a1=12c14 a2=aaaac5ebe028 a3=aaaad4c13010 items=0 ppid=62 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:42.098000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 
18:36:42.115204 kernel: audit: type=1327 audit(1707503802.098:280): proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.121718 kernel: Failed to create system directory fscache Feb 9 18:36:42.121769 kernel: audit: type=1400 audit(1707503802.114:281): avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.121789 kernel: Failed to create system directory fscache Feb 9 18:36:42.121802 kernel: audit: type=1400 audit(1707503802.114:281): avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.126343 kernel: Failed to create system directory fscache Feb 9 18:36:42.126368 kernel: audit: type=1400 audit(1707503802.114:281): avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.126392 kernel: Failed to create system directory fscache Feb 9 18:36:42.126411 kernel: audit: type=1400 audit(1707503802.114:281): avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.128658 kernel: Failed to create system directory fscache Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.129913 kernel: Failed to create system directory fscache Feb 9 18:36:42.129951 kernel: Failed to create system directory fscache Feb 9 18:36:42.129968 kernel: Failed to create system directory fscache Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { 
confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.131179 kernel: Failed to create system directory fscache Feb 9 18:36:42.131207 kernel: Failed to create system directory fscache Feb 9 18:36:42.131223 kernel: Failed to create system directory fscache Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.132018 kernel: Failed to create system directory fscache Feb 9 18:36:42.132050 kernel: Failed to create system directory fscache Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.114000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.132879 kernel: Failed to create system directory fscache Feb 9 18:36:42.114000 audit[3560]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaad4e35210 a1=4c344 a2=aaaac5ebe028 a3=aaaad4c13010 items=0 ppid=62 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:42.114000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 18:36:42.133878 kernel: FS-Cache: Loaded Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.153215 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.153280 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.153302 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.153323 kernel: Failed to create system 
directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.154049 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.154082 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.154889 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.154915 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.156190 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.156216 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.156237 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.157045 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.157070 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.157902 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.157927 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" 
lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.159184 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.159208 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.159225 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.160045 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.160069 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.160876 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.160900 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.162116 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.162140 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.162160 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.162936 kernel: Failed to create system directory 
sunrpc Feb 9 18:36:42.162960 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.164180 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.164204 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.164217 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.165006 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.165029 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.166245 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.166273 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.166290 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.167070 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.167088 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of 
tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.167911 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.167934 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.169155 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.169181 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.169199 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.169978 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.170001 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.171221 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.171243 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.171261 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.172053 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.172089 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.172878 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.172921 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.174130 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.174157 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.174175 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.174976 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.175000 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.176226 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.176251 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.176268 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.177086 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.177113 kernel: Failed to create system directory 
sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.177944 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.177968 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.179251 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.179268 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.179289 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.180116 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.180142 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.180953 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.180975 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.182237 
kernel: Failed to create system directory sunrpc Feb 9 18:36:42.182263 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.182278 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.183099 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.183125 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.183947 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.183971 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.185298 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.185317 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.185337 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.190319 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.190421 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.190439 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.190452 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.190465 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.190480 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.190499 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.190517 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.190531 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.190545 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.190569 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.190589 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 
comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.191165 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.191193 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.191985 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.192013 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.193225 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.193256 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.193273 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.194066 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.194090 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.194886 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.194906 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.196135 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.196159 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.196175 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.196964 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.196986 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.198227 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.198245 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.198266 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.199057 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.199091 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.199876 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.199907 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.201114 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.201134 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.201152 kernel: Failed to create system directory 
sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.201955 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.201977 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.203202 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.203224 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.203250 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.204078 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.204107 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.144000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.204929 kernel: Failed to create system directory sunrpc Feb 9 18:36:42.209902 kernel: RPC: Registered named UNIX socket transport module. Feb 9 18:36:42.209954 kernel: RPC: Registered udp transport module. Feb 9 18:36:42.209972 kernel: RPC: Registered tcp transport module. Feb 9 18:36:42.209985 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 9 18:36:42.144000 audit[3560]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaad4e81560 a1=fbb6c a2=aaaac5ebe028 a3=aaaad4c13010 items=6 ppid=62 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:42.144000 audit: CWD cwd="/" Feb 9 18:36:42.144000 audit: PATH item=0 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:42.144000 audit: PATH item=1 name=(null) inode=18244 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:42.144000 audit: PATH item=2 name=(null) inode=18244 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:42.144000 audit: PATH item=3 name=(null) inode=18245 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:42.144000 audit: PATH item=4 name=(null) inode=18244 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:42.144000 audit: PATH item=5 name=(null) inode=18246 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:36:42.144000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673 Feb 9 18:36:42.223000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.223000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.223000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.232122 kernel: Failed to create system directory nfs Feb 9 18:36:42.232166 kernel: Failed to create system directory nfs Feb 9 18:36:42.232181 kernel: Failed to create system directory nfs Feb 9 18:36:42.232193 kernel: Failed to create system directory nfs Feb 9 18:36:42.223000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.223000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Feb 9 18:36:42.232938 kernel: Failed to create system directory nfs Feb 9 18:36:42.232963 kernel: Failed to create system directory nfs Feb 9 18:36:42.223000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 18:36:42.223000 audit[3560]: AVC avc: denied { confidentiality } for pid=3560 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 18:36:42.234151 kernel: Failed to create system directory nfs
Feb 9 18:36:42.268886 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 9 18:36:42.223000 audit[3560]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaad4fadb40 a1=ae35c a2=aaaac5ebe028 a3=aaaad4c13010 items=0 ppid=62 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:42.223000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673
Feb 9 18:36:42.286000 audit[3567]: AVC avc: denied { confidentiality } for pid=3567 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 18:36:42.296171 kernel: Failed to create system directory nfs4
Feb 9 18:36:42.439913 kernel: NFS: Registering the id_resolver key type
Feb 9 18:36:42.440058 kernel: Key type id_resolver registered
Feb 9 18:36:42.440100 kernel: Key type id_legacy registered
Feb 9 18:36:42.286000 audit[3567]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=ffff8d61b010 a1=167c04 a2=aaaae304e028 a3=aaaae4256010 items=0 ppid=62 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:42.286000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634
Feb 9 18:36:42.447000 audit[3568]: AVC avc: denied { confidentiality } for pid=3568 comm="modprobe" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0
Feb 9 18:36:42.452956 kernel: Failed to create system directory rpcgss
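The audit PROCTITLE records above store the offending process's command line as a hex string with NUL-separated arguments. Below is a minimal Python sketch (the helper name is my own; the hex values are copied verbatim from the records above) that turns those strings back into readable command lines:

# A small sketch: decode audit PROCTITLE values (hex-encoded, NUL-separated argv) into command lines.
proctitles = [
    "2F7362696E2F6D6F6470726F6265002D71002D2D0066732D6E6673",  # from the audit[3560] record above
    "2F7362696E2F6D6F6470726F6265002D71002D2D006E66737634",    # from the audit[3567] record above
]

def decode_proctitle(hex_value):
    """Turn a PROCTITLE hex string into a space-joined command line."""
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode("utf-8", errors="replace") for part in raw.split(b"\x00"))

for value in proctitles:
    print(decode_proctitle(value))
# Prints:
#   /sbin/modprobe -q -- fs-nfs
#   /sbin/modprobe -q -- nfsv4

The same decoding applies to the rpc-auth-6 and iptables-nft-restore PROCTITLE records further down in this log.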
Feb 9 18:36:42.447000 audit[3568]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=ffff89ef3010 a1=3e09c a2=aaaabdf5e028 a3=aaaadae53010 items=0 ppid=62 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="modprobe" exe="/usr/bin/kmod" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:36:42.447000 audit: PROCTITLE proctitle=2F7362696E2F6D6F6470726F6265002D71002D2D007270632D617574682D36
Feb 9 18:36:42.490158 nfsidmap[3577]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 9 18:36:42.492981 nfsidmap[3580]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 9 18:36:42.502000 audit[1]: AVC avc: denied { watch_reads } for pid=1 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2318 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0
Feb 9 18:36:42.502000 audit[1294]: AVC avc: denied { watch_reads } for pid=1294 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2318 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0
Feb 9 18:36:42.502000 audit[1294]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=d a1=aaaae4efe7f0 a2=10 a3=0 items=0 ppid=1 pid=1294 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd"
exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:42.502000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 18:36:42.502000 audit[1294]: AVC avc: denied { watch_reads } for pid=1294 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2318 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 18:36:42.502000 audit[1294]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=d a1=aaaae4efe7f0 a2=10 a3=0 items=0 ppid=1 pid=1294 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:42.502000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 18:36:42.502000 audit[1294]: AVC avc: denied { watch_reads } for pid=1294 comm="systemd" path="/run/mount/utab.lock" dev="tmpfs" ino=2318 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:object_r:mount_runtime_t:s0 tclass=file permissive=0 Feb 9 18:36:42.502000 audit[1294]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=d a1=aaaae4efe7f0 a2=10 a3=0 items=0 ppid=1 pid=1294 auid=4294967295 uid=500 gid=500 euid=500 suid=500 fsuid=500 egid=500 sgid=500 fsgid=500 tty=(none) ses=4294967295 comm="systemd" exe="/usr/lib/systemd/systemd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:42.502000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D64002D2D75736572 Feb 9 18:36:42.749543 kubelet[1561]: E0209 18:36:42.749419 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:42.771280 env[1218]: time="2024-02-09T18:36:42.771225925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f1b594e1-701b-4b6e-aeda-73ffba65019c,Namespace:default,Attempt:0,}" Feb 9 18:36:42.922706 systemd-networkd[1103]: cali5ec59c6bf6e: Link UP Feb 9 18:36:42.924138 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:36:42.924170 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5ec59c6bf6e: link becomes ready Feb 9 18:36:42.926028 systemd-networkd[1103]: cali5ec59c6bf6e: Gained carrier Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.838 [INFO][3583] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.94-k8s-test--pod--1-eth0 default f1b594e1-701b-4b6e-aeda-73ffba65019c 1151 0 2024-02-09 18:36:27 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.94 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.838 [INFO][3583] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.869 [INFO][3597] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" 
HandleID="k8s-pod-network.8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" Workload="10.0.0.94-k8s-test--pod--1-eth0" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.884 [INFO][3597] ipam_plugin.go 268: Auto assigning IP ContainerID="8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" HandleID="k8s-pod-network.8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" Workload="10.0.0.94-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a20e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.94", "pod":"test-pod-1", "timestamp":"2024-02-09 18:36:42.869466381 +0000 UTC"}, Hostname:"10.0.0.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.884 [INFO][3597] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.884 [INFO][3597] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.884 [INFO][3597] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.94' Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.886 [INFO][3597] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" host="10.0.0.94" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.894 [INFO][3597] ipam.go 372: Looking up existing affinities for host host="10.0.0.94" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.898 [INFO][3597] ipam.go 489: Trying affinity for 192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.901 [INFO][3597] ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.904 [INFO][3597] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="10.0.0.94" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.904 [INFO][3597] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" host="10.0.0.94" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.906 [INFO][3597] ipam.go 1682: Creating new handle: k8s-pod-network.8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9 Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.910 [INFO][3597] ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" host="10.0.0.94" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.915 [INFO][3597] ipam.go 1216: Successfully claimed IPs: [192.168.24.5/26] block=192.168.24.0/26 handle="k8s-pod-network.8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" host="10.0.0.94" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.915 [INFO][3597] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.5/26] handle="k8s-pod-network.8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" host="10.0.0.94" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.915 [INFO][3597] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.915 [INFO][3597] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.24.5/26] IPv6=[] ContainerID="8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" HandleID="k8s-pod-network.8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" Workload="10.0.0.94-k8s-test--pod--1-eth0" Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.918 [INFO][3583] k8s.go 385: Populated endpoint ContainerID="8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f1b594e1-701b-4b6e-aeda-73ffba65019c", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:42.935700 env[1218]: 2024-02-09 18:36:42.919 [INFO][3583] k8s.go 386: Calico CNI using IPs: [192.168.24.5/32] ContainerID="8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" Feb 9 18:36:42.936416 env[1218]: 2024-02-09 18:36:42.920 [INFO][3583] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" Feb 9 18:36:42.936416 env[1218]: 2024-02-09 18:36:42.924 [INFO][3583] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" Feb 9 18:36:42.936416 env[1218]: 2024-02-09 18:36:42.924 [INFO][3583] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"f1b594e1-701b-4b6e-aeda-73ffba65019c", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"b2:3d:d0:5c:82:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:36:42.936416 env[1218]: 2024-02-09 18:36:42.934 [INFO][3583] k8s.go 491: Wrote updated endpoint to datastore ContainerID="8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" Feb 9 18:36:42.949735 env[1218]: time="2024-02-09T18:36:42.949664767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:36:42.949735 env[1218]: time="2024-02-09T18:36:42.949709566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:36:42.947000 audit[3631]: NETFILTER_CFG table=filter:105 family=2 entries=42 op=nft_register_chain pid=3631 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:36:42.947000 audit[3631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20268 a0=3 a1=ffffe1e89470 a2=0 a3=ffffb132efa8 items=0 ppid=2352 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:42.947000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:36:42.950113 env[1218]: time="2024-02-09T18:36:42.949733566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:36:42.950244 env[1218]: time="2024-02-09T18:36:42.950210800Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9 pid=3632 runtime=io.containerd.runc.v2 Feb 9 18:36:42.993685 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 18:36:43.016769 env[1218]: time="2024-02-09T18:36:43.016718869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f1b594e1-701b-4b6e-aeda-73ffba65019c,Namespace:default,Attempt:0,} returns sandbox id \"8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9\"" Feb 9 18:36:43.018539 env[1218]: time="2024-02-09T18:36:43.018486088Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 18:36:43.330994 env[1218]: time="2024-02-09T18:36:43.330884556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:43.336947 env[1218]: time="2024-02-09T18:36:43.336914487Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:43.338413 env[1218]: time="2024-02-09T18:36:43.338371230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:43.340590 env[1218]: time="2024-02-09T18:36:43.340558525Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:36:43.341257 env[1218]: time="2024-02-09T18:36:43.341226598Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\"" Feb 9 18:36:43.343589 env[1218]: time="2024-02-09T18:36:43.343160896Z" level=info msg="CreateContainer within sandbox \"8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 18:36:43.354182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827596574.mount: Deactivated successfully. 
Feb 9 18:36:43.355142 env[1218]: time="2024-02-09T18:36:43.355109359Z" level=info msg="CreateContainer within sandbox \"8f0cdb826f18c5ee3cefa6f4a3dfa1a9252b9da3661e189262065912400b67f9\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6fdd14363b83703a88120b4d2377ba6746fa65137388c78099c79ca32474597c\"" Feb 9 18:36:43.355498 env[1218]: time="2024-02-09T18:36:43.355477595Z" level=info msg="StartContainer for \"6fdd14363b83703a88120b4d2377ba6746fa65137388c78099c79ca32474597c\"" Feb 9 18:36:43.407649 env[1218]: time="2024-02-09T18:36:43.407356561Z" level=info msg="StartContainer for \"6fdd14363b83703a88120b4d2377ba6746fa65137388c78099c79ca32474597c\" returns successfully" Feb 9 18:36:43.750482 kubelet[1561]: E0209 18:36:43.750364 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:44.013193 kubelet[1561]: I0209 18:36:44.013149 1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.22337201984166e+09 pod.CreationTimestamp="2024-02-09 18:36:27 +0000 UTC" firstStartedPulling="2024-02-09 18:36:43.017928775 +0000 UTC m=+73.593612867" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:36:44.01281508 +0000 UTC m=+74.588499212" watchObservedRunningTime="2024-02-09 18:36:44.013116997 +0000 UTC m=+74.588801129" Feb 9 18:36:44.325998 systemd-networkd[1103]: cali5ec59c6bf6e: Gained IPv6LL Feb 9 18:36:44.750987 kubelet[1561]: E0209 18:36:44.750746 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:45.751507 kubelet[1561]: E0209 18:36:45.751469 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:46.752383 kubelet[1561]: E0209 18:36:46.752338 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:47.753269 kubelet[1561]: E0209 18:36:47.753225 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:48.753536 kubelet[1561]: E0209 18:36:48.753491 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 18:36:49.754427 kubelet[1561]: E0209 18:36:49.754389 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"